Q 1. Give examples of specific situations that would call for the following types of
research, explaining why – a) Exploratory research b) Descriptive research c) Diagnostic
research d) Evaluation research.
Ans.: Research may be classified broadly according to its major intent or the methods used. According
to the intent, research may be classified as:
Basic (aka fundamental or pure) research is driven by a scientist's curiosity or interest in a
scientific question. The main motivation is to expand man's knowledge, not to create or invent
something. There is no obvious commercial value to the discoveries that result from basic
research.
For example, basic science investigations probe for answers to questions such as:
• How did the universe begin?
Most scientists believe that a basic, fundamental understanding of all branches of science is
needed in order for progress to take place. In other words, basic research lays down the
foundation for the applied science that follows. If basic work is done first, then applied spin-offs
often eventually result from this research. As Dr. George Smoot of LBNL says, "People cannot
foresee the future well enough to predict what's going to develop from basic research. If we only
did applied research, we would still be making better spears."
Applied research is designed to solve practical problems of the modern world, rather than to
acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to
improve the human condition.
Some scientists feel that the time has come for a shift in emphasis away from purely basic
research and toward applied science. This trend, they feel, is necessitated by the problems
resulting from global overpopulation, pollution, and the overuse of the earth's natural resources.
Exploratory research provides insights into and comprehension of an issue or situation. It
should draw definitive conclusions only with extreme caution. Exploratory research is a type of
research conducted because a problem has not been clearly defined. Exploratory research helps
determine the best research design, data collection method and selection of subjects. Given its
fundamental nature, exploratory research often concludes that a perceived problem does not
actually exist.
Exploratory research often relies on secondary research such as reviewing available literature
and/or data, or qualitative approaches such as informal discussions with consumers, employees,
management or competitors, and more formal approaches through in-depth interviews, focus
groups, projective methods, case studies or pilot studies. The Internet allows for research
methods that are more interactive in nature: E.g., RSS feeds efficiently supply researchers with
up-to-date information; major search engine search results may be sent by email to researchers
by services such as Google Alerts; comprehensive search results are tracked over lengthy
periods of time by services such as Google Trends; and Web sites may be created to attract
worldwide feedback on any subject.
The results of exploratory research are not usually useful for decision-making by themselves, but
they can provide significant insight into a given situation. Although the results of qualitative
research can give some indication as to the "why", "how" and "when" something occurs, they cannot
tell us "how often" or "how many."
Exploratory research is not typically generalizable to the population at large.
A defining characteristic of causal research is the random assignment of participants to the
conditions of the experiment; e.g., an Experimental and a Control Condition... Such assignment
results in the groups being comparable at the beginning of the experiment. Any difference
between the groups at the end of the experiment is attributable to the manipulated variable.
Observational research typically looks for differences among intact, pre-defined groups. A common
example compares smokers and non-smokers with regard to health problems. Causal
conclusions can't be drawn from such a study because of other possible differences between the
groups; e.g., smokers may drink more alcohol than non-smokers. Other unknown differences
could exist as well. Hence, we may see a relation between smoking and health, but a conclusion
that smoking is a cause would not be warranted in this situation.
Descriptive research, also known as statistical research, describes data and characteristics
about the population or phenomenon being studied. Descriptive research answers the questions
who, what, where, when and how.
Although the data description is factual, accurate and systematic, the research cannot describe
what caused a situation. Thus, descriptive research cannot be used to create a causal
relationship, where one variable affects another. In other words, descriptive research can be said
to have a low requirement for internal validity.
The description is used for frequencies, averages and other statistical calculations. Often the best
approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative
research often has the aim of description and researchers may follow-up with examinations of
why the observations exist and what the implications of the findings are.
In short, descriptive research deals with everything that can be counted and studied, but there are
always restrictions on that. Your research must have an impact on the lives of the people around
you. For example, by finding the most frequent disease that affects the children of a town, the
reader of the research will know what to do to prevent that disease; thus, more people will live a
healthy life.
Diagnostic study: It is similar to a descriptive study but with a different focus. It is directed towards
discovering what is happening and what can be done about it. It aims at identifying the causes of a
problem and the possible solutions for it. It may also be concerned with discovering and testing
whether certain variables are associated. This type of research requires prior knowledge of the
problem, its thorough formulation, clear-cut definition of the given population, adequate methods
for collecting accurate information, precise measurement of variables, statistical analysis and tests
of significance.
Evaluation Studies: It is a type of applied research. It is made for assessing the effectiveness of
social or economic programmes implemented, or for assessing the impact of development projects
on the project area. It is thus directed to assess or appraise the quality and quantity of an activity
and its performance, and to specify its attributes and the conditions required for its success. It is
concerned with causal relationships and is more actively guided by hypotheses. It is also concerned
with change over time.
Action research is a reflective process of progressive problem solving led by individuals working
with others in teams or as part of a "community of practice" to improve the way they address
issues and solve problems. Action research can also be undertaken by larger organizations or
institutions, assisted or guided by professional researchers, with the aim of improving their
strategies, practices, and knowledge of the environments within which they practice. As designers
and stakeholders, researchers work with others to propose a new course of action to help their
community improve its work practices (Center for Collaborative Action Research). Kurt Lewin,
then a professor at MIT, first coined the term “action research” in about 1944, and it appears in
his 1946 paper “Action Research and Minority Problems”. In that paper, he described action
research as “a comparative research on the conditions and effects of various forms of social
action and research leading to social action” that uses “a spiral of steps, each of which is
composed of a circle of planning, action, and fact-finding about the result of the action”.
Action research is an interactive inquiry process that balances problem solving actions
implemented in a collaborative context with data-driven collaborative analysis or research to
understand underlying causes enabling future predictions about personal and organizational
change (Reason & Bradbury, 2001). After six decades of action research development, many
methodologies have evolved that adjust the balance to focus more on the actions taken or more
on the research that results from the reflective understanding of the actions. This tension exists
between
• those that are more driven by the researcher's agenda to those more driven by
participants;
• those that are motivated primarily by instrumental goal attainment to those motivated
primarily by the aim of personal, organizational, or societal transformation; and
• 1st-, to 2nd-, to 3rd-person research, that is, my research on my own action, aimed
primarily at personal change; our research on our group (family/team), aimed
primarily at improving the group; and 'scholarly' research aimed primarily at
theoretical generalization and/or large-scale change.
Action research challenges traditional social science, by moving beyond reflective knowledge
created by outside experts sampling variables to an active moment-to-moment theorizing, data
collecting, and inquiring occurring in the midst of emergent structure. “Knowledge is always
gained through action and for action. From this starting point, to question the validity of social
knowledge is to question, not how to develop a reflective science about action, but how to
develop genuinely well-informed action — how to conduct an action science” (Torbert 2001).
Q 2.In the context of hypothesis testing, briefly explain the difference between a) Null and
alternative hypothesis b) Type 1 and type 2 error c) Two tailed and one tailed test d)
Parametric and non-parametric tests.
Ans.: Some basic concepts in the context of testing of hypotheses are explained below -
1) Null Hypotheses and Alternative Hypotheses: In the context of statistical analysis,
we often talk about null and alternative hypotheses. If we are to compare the
superiority of method A with that of method B and we proceed on the assumption that
both methods are equally good, then this assumption is termed as a null hypothesis.
On the other hand, if we think that method A is superior, then it is known as an
alternative hypothesis.
These are symbolically represented as:
Null hypothesis = H0 and Alternative hypothesis = Ha
Suppose we want to test the hypothesis that the population mean is equal to the hypothesized
mean µH0 = 100. Then we would say that the null hypothesis is that the population mean is
equal to the hypothesized mean 100, and symbolically we can express it as: H0: µ = µH0 = 100.
If our sample results do not support this null hypothesis, we should conclude that something else
is true. What we conclude on rejecting the null hypothesis is known as the alternative hypothesis. If
we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For H0:
µ = µH0 = 100, we may consider three possible alternative hypotheses as follows:
Ha: µ ≠ µH0 (The alternative hypothesis is that the population mean is not equal to 100)
Ha: µ > µH0 (The alternative hypothesis is that the population mean is greater than 100)
Ha: µ < µH0 (The alternative hypothesis is that the population mean is less than 100)
The null hypotheses and the alternative hypotheses are chosen before the sample is drawn (the
researcher must avoid the error of deriving hypotheses from the data he collects and testing the
hypotheses from the same data). In the choice of null hypothesis, the following considerations are
usually kept in view:
a. The alternative hypothesis is usually the one which is to be proved, and the null
hypothesis is the one that is to be disproved. Thus a null hypothesis represents
the hypothesis we are trying to reject, while the alternative hypothesis represents
all other possibilities.
b. If the rejection of a certain hypothesis when it is actually true involves great risk, it
is taken as the null hypothesis, because then the probability of rejecting it when it is
true is α (the level of significance), which is chosen very small.
c. The null hypothesis should always be a specific hypothesis, i.e., it should not state
an approximate value.
Generally, in hypothesis testing, we proceed on the basis of the null hypothesis, keeping the
alternative hypothesis in view. Why so? The answer is that on the assumption that the null
hypothesis is true, one can assign the probabilities to different possible sample results, but this
cannot be done if we proceed with alternative hypotheses. Hence the use of null hypotheses (at
times also known as statistical hypotheses) is quite frequent.
2) The Level of Significance: This is a very important concept in the context of
hypothesis testing. It is always some percentage (usually 5%), which should be
chosen with great care, thought and reason. In case we take the significance
level at 5%, this implies that H0 will be rejected when the sampling result
(i.e., observed evidence) has a less than 0.05 probability of occurring if H0 is
true. In other words, the 5% level of significance means that the researcher is
willing to take as much as a 5% risk of rejecting the null hypothesis when it (H0)
happens to be true. Thus the significance level is the maximum value of the
probability of rejecting H0 when it is true, and is usually determined in advance,
before testing the hypothesis.
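As an illustration, the sketch below (a minimal example assuming Python with scipy installed; the sample data are invented) compares the p-value of a one-sample t-test against a 5% significance level for H0: µ = 100:

# Minimal sketch: testing H0: mu = 100 at the 5% significance level.
# The sample data below are invented purely for illustration.
from scipy import stats

sample = [102, 98, 105, 110, 97, 103, 99, 106, 101, 104]
alpha = 0.05  # level of significance chosen before testing

t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
if p_value < alpha:
    print("Reject H0: the sample mean differs significantly from 100")
else:
    print("Fail to reject H0 at the 5% level")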
3) Decision Rule or Test of Hypotheses: Given a null hypothesis H0 and an
alternative hypothesis Ha, we make a rule, which is known as a decision rule,
according to which we accept H0 (i.e., reject Ha) or reject H0 (i.e., accept Ha).
For instance, if H0 is that a certain lot is good (there are very few defective items
in it), against Ha, that the lot is not good (there are many defective items in it),
then we must decide the number of items to be tested and the criterion for
accepting or rejecting the hypothesis. We might test 10 items in the lot and plan
our decision saying that if there are none or only 1 defective item among the 10,
we will accept H0; otherwise we will reject H0 (or accept Ha). This sort of basis is
known as a decision rule.
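This acceptance rule can be evaluated directly: under a given true proportion of defectives, the probability of accepting H0 is the binomial probability of finding at most one defective among the 10 tested items. A minimal sketch, assuming Python with scipy; the defect rates are illustrative:

# Sketch of the decision rule above: accept the lot if at most 1 of the
# 10 tested items is defective. The defect rates are illustrative.
from scipy.stats import binom

for p in (0.05, 0.10, 0.30):   # assumed true proportion of defectives
    accept_prob = binom.cdf(1, n=10, p=p)  # P(0 or 1 defectives in 10)
    print(f"defect rate {p:.0%}: P(accept H0) = {accept_prob:.3f}")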
4) Type I & II Errors: In the context of testing of hypotheses, there are basically two
types of errors that we can make. We may reject H0 when H0 is true, and we may
accept H0 when it is not true. The former is known as a Type I error and the latter is
known as a Type II error. In other words, a Type I error means rejection of a
hypothesis which should have been accepted, and a Type II error means acceptance
of a hypothesis which should have been rejected. Type I error is denoted by α
(alpha), also called the level of significance of the test; Type II error is denoted by
β (beta).
                         Decision
                         Accept H0                   Reject H0
H0 (true)                Correct decision            Type I error (α error)
H0 (false)               Type II error (β error)     Correct decision
The probability of Type I error is usually determined in advance and is understood as the level of
significance of testing the hypotheses. If type I error is fixed at 5%, it means there are about 5
chances in 100 that we will reject H0 when H0 is true. We can control type I error just by fixing it
at a lower level. For instance, if we fix it at 1%, we will say that the maximum probability of
committing type I error would only be 0.01.
But with a fixed sample size n, when we try to reduce Type I error, the probability of committing
Type II error increases. Both types of errors cannot be reduced simultaneously; there is a
trade-off between them. In business situations, decision makers decide the appropriate level of
Type I error by examining the costs or penalties attached to both types of errors. If Type I error
involves the time and trouble of reworking a batch of chemicals that should have been accepted,
whereas Type II error means taking a chance that an entire group of users of this chemical
compound will be poisoned, then in such a situation one should prefer a Type I error to a Type II
error. As a result, one must set a very high level for Type I error in one's testing technique for
such a hypothesis.
Hence, in testing of hypotheses, one must make all possible efforts to strike an adequate balance
between Type I & Type II error.
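The trade-off can be made concrete with a small computation. The sketch below (Python with scipy; the values of σ, n and the true mean are invented for illustration) shows that, for a one-sided z-test of H0: µ = 100, lowering α raises β:

# Sketch of the alpha/beta trade-off for a one-sided z-test of
# H0: mu = 100 against Ha: mu > 100, with sigma and n assumed known.
import math
from scipy.stats import norm

sigma, n, true_mu = 15, 25, 106      # illustrative assumptions
se = sigma / math.sqrt(n)

for alpha in (0.10, 0.05, 0.01):
    critical = 100 + norm.ppf(1 - alpha) * se        # rejection cut-off
    beta = norm.cdf(critical, loc=true_mu, scale=se)  # P(accept H0 | Ha true)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.3f}")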
5) Two-Tailed Test & One-Tailed Test: In the context of hypothesis testing, these two terms
are quite important and must be clearly understood. A two-tailed test rejects the null hypothesis if,
say, the sample mean is significantly higher or lower than the hypothesized value of the mean of
the population. Such a test is appropriate when we have H0: µ = µH0 and Ha: µ ≠ µH0, which
may mean µ > µH0 or µ < µH0. If the significance level is 5% and a two-tailed test is to be applied, the
probability of the rejection region will be 0.05 (equally split on both tails of the curve as 0.025) and
that of the acceptance region will be 0.95. If we take µ = 100 and our sample mean deviates
significantly from 100 in either direction, then we shall reject the null hypothesis. But there are situations when
only a one-tailed test is considered appropriate. A one-tailed test would be used when we are to
test, say, whether the population mean is either lower or higher than some hypothesized value.
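A minimal sketch of the distinction, assuming Python with scipy 1.6 or later (for the `alternative` keyword) and invented data:

# Sketch contrasting a two-tailed and a one-tailed test of H0: mu = 100
# (requires scipy >= 1.6 for the `alternative` keyword; data invented).
from scipy import stats

sample = [104, 108, 99, 111, 103, 107, 102, 109]

_, p_two = stats.ttest_1samp(sample, 100)                         # Ha: mu != 100
_, p_one = stats.ttest_1samp(sample, 100, alternative="greater")  # Ha: mu > 100
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")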
Parametric statistics is a branch of statistics that assumes data come from a type of probability
distribution and makes inferences about the parameters of that distribution. Most well-known
elementary statistical methods are parametric.
Generally speaking, parametric methods make more assumptions than non-parametric
methods. If those extra assumptions are correct, parametric methods can produce more accurate
and precise estimates. They are said to have more statistical power. However, if those
assumptions are incorrect, parametric methods can be very misleading. For that reason they are
often not considered robust. On the other hand, parametric formulae are often simpler to write
down and faster to compute. In some, but definitely not all cases, their simplicity makes up for
their non-robustness, especially if care is taken to examine diagnostic statistics.
Because parametric statistics require a probability distribution, they are not distribution-free.
Non-parametric models differ from parametric models in that the model structure is not
specified a priori but is instead determined from data. The term nonparametric is not meant to
imply that such models completely lack parameters but that the number and nature of the
parameters are flexible and not fixed in advance.
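The contrast can be seen by running a parametric and a non-parametric test on the same data. A sketch, assuming Python with scipy; the two groups are invented:

# Sketch comparing a parametric and a non-parametric two-sample test on
# the same (invented) data: the t-test assumes normality, the
# Mann-Whitney U test does not.
from scipy import stats

group_a = [12.1, 14.3, 13.8, 15.2, 12.9, 14.7]
group_b = [10.4, 11.8, 12.2, 10.9, 11.5, 12.6]

t_stat, p_param = stats.ttest_ind(group_a, group_b)  # parametric
u_stat, p_nonparam = stats.mannwhitneyu(group_a, group_b,
                                        alternative="two-sided")  # non-parametric
print(f"t-test p = {p_param:.4f}, Mann-Whitney p = {p_nonparam:.4f}")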
Kernel density estimation provides better estimates of the density than histograms.
Nonparametric regression and semiparametric regression methods have been developed based
on kernels, splines, and wavelets.
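A minimal kernel density estimation sketch, assuming Python with numpy and scipy; the sample is simulated:

# Sketch of kernel density estimation with scipy; the sample is invented.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.0, scale=2.0, size=200)

kde = gaussian_kde(sample)       # bandwidth chosen by Scott's rule
grid = np.linspace(0, 10, 5)
print(dict(zip(np.round(grid, 1), np.round(kde(grid), 3))))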
Data Envelopment Analysis provides efficiency coefficients similar to those obtained
by Multivariate Analysis without any distributional assumption.
Q 3. Explain the difference between a causal relationship and correlation, with an example
of each. What are the possible reasons for a correlation between two variables?
Ans.: Correlation: In this marketing context, correlation lies in knowing what the consumer wants and
providing it. Marketing research looks at trends in sales and studies all of the variables, i.e., price,
color, availability, and styles, and the best way to give the customer what he or she wants. If you can
give customers what they want, they will buy, and let friends and family know where they got it.
Making them happy makes the money.
Causal relationship: Relationship Marketing was first defined as a form of marketing developed from direct
response marketing campaigns, which emphasizes customer retention and satisfaction rather
than a dominant focus on sales transactions.
As a practice, Relationship Marketing differs from other forms of marketing in that it recognizes
the long term value of customer relationships and extends communication beyond intrusive
advertising and sales promotional messages.
With the growth of the internet and mobile platforms, Relationship Marketing has continued to
evolve and move forward as technology opens more collaborative and social communication
channels. This includes tools for managing relationships with customers that go beyond simple
demographic and customer service data. Relationship Marketing extends to include Inbound
Marketing efforts (a combination of search optimization and Strategic Content), PR, Social Media
and Application Development.
Reasons for a correlation between two variables: Chance association (the relationship is due
to chance), causative association (one variable causes the other), or association through a third,
confounding variable that influences both.
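The third reason can be demonstrated by simulation. In the hypothetical sketch below (Python with numpy; all numbers and the classic ice-cream/drownings example are invented for illustration), a confounding variable drives two otherwise unrelated quantities, producing a strong correlation without any causal link between them:

# Sketch: a third variable (here, temperature) can make two otherwise
# unrelated quantities appear correlated. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(1)
temperature = rng.uniform(10, 35, size=500)            # confounder
ice_cream = 2.0 * temperature + rng.normal(0, 5, 500)  # driven by temperature
drownings = 0.3 * temperature + rng.normal(0, 2, 500)  # also driven by temperature

r = np.corrcoef(ice_cream, drownings)[0, 1]
print(f"correlation = {r:.2f} despite no causal link between the two")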
The information given by a correlation coefficient is not enough to define the dependence
structure between random variables. The correlation coefficient completely defines the
dependence structure only in very particular cases, for example when the distribution is a
multivariate normal distribution. In the case of elliptic distributions it
characterizes the (hyper-)ellipses of equal density, however, it does not completely characterize
the dependence structure (for example, a multivariate t-distribution's degrees of freedom
determine the level of tail dependence).
Distance correlation and Brownian covariance / Brownian correlation [8][9] were introduced to
address the deficiency of Pearson's correlation that it can be zero for dependent random
variables; zero distance correlation and zero Brownian correlation imply independence.
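The deficiency itself is easy to reproduce: in the sketch below (Python with numpy; data simulated), Y is completely determined by X, yet the Pearson correlation is near zero:

# Sketch of the deficiency mentioned above: Y = X**2 is completely
# determined by X, yet the Pearson correlation is (near) zero when X is
# symmetric about 0. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 100_000)
y = x ** 2                      # perfectly dependent on x

print(f"Pearson r = {np.corrcoef(x, y)[0, 1]:.3f}")  # close to 0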
The correlation ratio is able to detect almost any functional dependency, and the entropy-based
mutual information/total correlation is capable of detecting even more general
dependencies. The latter are sometimes referred to as multi-moment correlation measures, in
comparison to those that consider only 2nd-moment (pairwise or quadratic) dependence.
The polychoric correlation is another correlation applied to ordinal data that aims to estimate the
correlation between theorised latent variables.
One way to capture a more complete view of dependence structure is to consider a copula
between the variables.
Q 4. Briefly explain any two factors that affect the choice of a sampling technique. What are the
characteristics of a good sample?
Ans.: The difference between non-probability and probability sampling is that non-probability
sampling does not involve random selection and probability sampling does. Does that mean that
non-probability samples aren't representative of the population? Not necessarily. But it does
mean that non-probability samples cannot depend upon the rationale of probability theory. At
least with a probabilistic sample, we know the odds or probability that we have represented the
population well. We are able to estimate confidence intervals for the statistic. With non-probability
samples, we may or may not represent the population well, and it will often be hard for us to know
how well we've done so. In general, researchers prefer probabilistic or random sampling methods
over non probabilistic ones, and consider them to be more accurate and rigorous. However, in
applied social research there may be circumstances where it is not feasible, practical or
theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic
alternatives.
Most sampling methods are purposive in nature because we usually approach the
sampling problem with a specific plan in mind. The most important distinctions among these types
of sampling methods are the ones between the different types of purposive sampling approaches.
• Expert Sampling
Expert sampling involves the assembling of a sample of persons with known or demonstrable
experience and expertise in some area. Often, we convene such a sample under the auspices of
a "panel of experts." There are actually two reasons you might do expert sampling. First, because
it would be the best way to elicit the views of persons who have specific expertise. In this case,
expert sampling is essentially just a specific sub case of purposive sampling. But the other reason
you might use expert sampling is to provide evidence for the validity of another sampling
approach you've chosen. For instance, let's say you do modal instance sampling and are
concerned that the criteria you used for defining the modal instance are subject to criticism. You
might convene an expert panel consisting of persons with acknowledged experience and insight
into that field or topic and ask them to examine your modal definitions and comment on their
appropriateness and validity. The advantage of doing this is that you aren't out on your own trying
to defend your decisions -- you have some acknowledged experts to back you. The disadvantage
is that even the experts can be, and often are, wrong.
• Quota Sampling
In quota sampling, you select people non-randomly according to some fixed quota. There are two
types of quota sampling: proportional and non-proportional. In proportional quota sampling you
want to represent the major characteristics of the population by sampling a proportional amount
of each. For instance, if you know the population has 40% women and 60% men, and that you
want a total sample size of 100, you will continue sampling until you get those percentages and
then you will stop. So, if you've already got the 40 women for your sample, but not the sixty men,
you will continue to sample men but even if legitimate women respondents come along, you will
not sample them because you have already "met your quota." The problem here (as in much
purposive sampling) is that you have to decide the specific characteristics on which you will base
the quota. Will it be by gender, age, education, race, religion, etc.?
Non-proportional quota sampling is a bit less restrictive. In this method, you specify the
minimum number of sampled units you want in each category. Here, you're not concerned with
having numbers that match the proportions in the population. Instead, you simply want to have
enough to assure that you will be able to talk about even small groups in the population. This
method is the non-probabilistic analogue of stratified random sampling in that it is typically used
to assure that smaller groups are adequately represented in your sample.
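A minimal sketch of proportional quota sampling as described above (Python; the stream of respondents is simulated, and the 40/60 quotas follow the example in the text):

# Sketch of proportional quota sampling: keep interviewing until each
# gender quota (40 women, 60 men out of 100) is filled. The stream of
# respondents is simulated.
import random

random.seed(3)
quotas = {"woman": 40, "man": 60}
sample = []

while any(q > 0 for q in quotas.values()):
    respondent = random.choice(["woman", "man"])  # next person encountered
    if quotas[respondent] > 0:        # still need this category
        sample.append(respondent)
        quotas[respondent] -= 1
    # otherwise the quota is already met, so the respondent is skipped

print(sample.count("woman"), "women and", sample.count("man"), "men")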
• Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't
concerned about representing these views proportionately. Another term for this is sampling for
diversity. In many brainstorming or nominal group processes (including concept mapping), we
would use some form of heterogeneity sampling because our primary interest is in getting a broad
spectrum of ideas, not identifying the "average" or "modal instance" ones. In effect, what we
would like to be sampling is not people, but ideas. We imagine that there is a universe of all
possible ideas relevant to some topic and that we want to sample this population, not the
population of people who have the ideas. Clearly, in order to get all of the ideas, and especially
the "outlier" or unusual ones, we have to include a broad and diverse range of participants.
Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.
• Snowball Sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in
your study. You then ask them to recommend others who they may know who also meet the
criteria. Although this method would hardly lead to representative samples, there are times when
it may be the best method available. Snowball sampling is especially useful when you are trying
to reach populations that are inaccessible or hard to find. For instance, if you are studying the
homeless, you are not likely to be able to find good lists of homeless people within a specific
geographical area. However, if you go to that area and identify one or two, you may find that they
know very well whom the other homeless people in their vicinity are and how you can find them.
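A sketch of the referral process, assuming Python; the contact network below is entirely hypothetical:

# Sketch of snowball sampling over a hypothetical contact network:
# start from one known member and follow referrals breadth-first.
from collections import deque

# Hypothetical referral network: who each person says they know.
referrals = {
    "seed": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": [],
    "D": ["E"],
    "E": [],
}

sampled, queue = set(), deque(["seed"])
while queue:
    person = queue.popleft()
    if person not in sampled:
        sampled.add(person)
        queue.extend(referrals.get(person, []))  # ask for further referrals

print(sorted(sampled))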
Characteristics of a good sample: The decision process is a complicated one. The researcher
has to first identify the limiting factor or factors and must judiciously balance the conflicting
factors. The various criteria governing the choice of the sampling technique are:
1. Purpose of the Survey: What does the researcher aim at? If he intends to
generalize the findings based on the sample survey to the population, then an
appropriate probability sampling method must be selected. The choice of a particular
type of probability sampling depends on the geographical area of the survey and the
size and the nature of the population under study.
2. Measurability: The application of statistical inference theory requires computation of
the sampling error from the sample itself. Only probability samples allow such
computation. Hence, where the research objective requires statistical inference, the
sample should be drawn by applying simple random sampling method or stratified
random sampling method, depending on whether the population is homogenous or
heterogeneous.
3. Degree of Precision: Should the results of the survey be very precise, or could even
rough results serve the purpose? The desired level of precision is one of the criteria
for sampling method selection. Where a high degree of precision of results is desired,
probability sampling should be used. Where even crude results would serve the
purpose (e.g., marketing surveys, readership surveys, etc.), any convenient non-
random sampling like quota sampling would be enough.
4. Information about Population: How much information is available about the
population to be studied? Where no list of population and no information about its
nature are available, it is difficult to apply a probability sampling method. Then an
exploratory study with non-probability sampling may be done to gain a better idea of
the population. After gaining sufficient knowledge about the population through the
exploratory study, an appropriate probability sampling design may be adopted.
5. The Nature of the Population: In terms of the variables to be studied, is the
population homogenous or heterogeneous? In the case of a homogenous population,
even simple random sampling will give a representative sample. If the population is
heterogeneous, stratified random sampling is appropriate (a small code sketch of this
follows the list below).
6. Geographical Area of the Study and the Size of the Population: If the area
covered by a survey is very large and the size of the population is quite large, multi-
stage cluster sampling would be appropriate. But if the area and the size of the
population are small, single stage probability sampling methods could be used.
7. Financial Resources: If the available finance is limited, it may become necessary to
choose a less costly sampling plan like multistage cluster sampling, or even quota
sampling as a compromise. However, if the objectives of the study and the desired
level of precision cannot be attained within the stipulated budget, there is no
alternative but to give up the proposed survey. Where the finance is not a constraint,
a researcher can choose the most appropriate method of sampling that fits the
research objective and the nature of population.
8. Time Limitation: The time limit within which the research project should be
completed restricts the choice of a sampling method. Then, as a compromise, it may
become necessary to choose less time consuming methods like simple random
sampling, instead of stratified sampling/sampling with probability proportional to size;
or multi-stage cluster sampling, instead of single-stage sampling of elements. Of
course, the precision has to be sacrificed to some extent.
9. Economy: It should be another criterion in choosing the sampling method. It means
achieving the desired level of precision at minimum cost. A sample is economical if
the precision per unit cost is high, or the cost per unit of variance is low. The above
criteria frequently conflict with each other and the researcher must balance and blend
them to obtain a good sampling plan. The chosen plan thus represents an adaptation
of the sampling theory to the available facilities and resources. That is, it represents a
compromise between idealism and feasibility. One should use simple workable
methods, instead of unduly elaborate and complicated techniques.
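As promised under point 5, here is a small sketch of stratified random sampling with proportional allocation (Python; the sampling frame and the two strata are invented):

# Sketch of stratified random sampling for a heterogeneous population:
# draw proportionally from each stratum. The population frame is simulated.
import random

random.seed(4)
strata = {
    "urban": [f"u{i}" for i in range(600)],
    "rural": [f"r{i}" for i in range(400)],
}
sample_size = 50

population_total = sum(len(units) for units in strata.values())
sample = []
for name, units in strata.items():
    k = round(sample_size * len(units) / population_total)  # proportional allocation
    sample.extend(random.sample(units, k))

print(len(sample), "units drawn:", sample[:5], "...")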
Q 5. Select any topic for research and explain how you will use both secondary and
primary sources to gather the required information.
Ans.: The response rate in mail surveys is generally very low in developing countries like India. Certain
techniques have to be adopted to increase the response rate. They are:
1. Quality printing: The questionnaire may be neatly printed on quality light colored paper,
so as to attract the attention of the respondent.
2. Covering letter: The covering letter should be couched in a pleasant style, so as to
attract and hold the interest of the respondent. It must anticipate objections and answer
them briefly. It is desirable to address the respondent by name.
3. Advance information: Advance information can be provided to potential respondents by
a telephone call, or advance notice in the newsletter of the concerned organization, or by
a letter. Such preliminary contact with potential respondents is more successful than
follow-up efforts.
4. Incentives: Money, stamps for collection and other incentives are also used to induce
respondents to complete and return the mail questionnaire.
5. Follow-up contacts: In the case of respondents belonging to an organization, they may
be approached through someone in that organization known to the researcher.
6. Larger sample size: A larger sample may be drawn than the estimated sample size. For
example, if the required sample size is 1000, a sample of 1500 may be drawn. This may
help the researcher to secure an effective sample size closer to the required size (the
arithmetic is sketched below).
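The over-sampling arithmetic behind point 6, as a tiny sketch (Python; the 65% response rate is an assumed figure from hypothetical past surveys):

# Sketch of the over-sampling arithmetic: if the expected response rate
# is known, draw enough extra units to end up near the required
# effective sample size.
required_n = 1000
expected_response_rate = 0.65          # assumed from past surveys

drawn_n = round(required_n / expected_response_rate)
print(f"draw {drawn_n} to expect about "
      f"{round(drawn_n * expected_response_rate)} responses")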
Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to
ascertain reader habits and interests. Develop a title for the study; define the
research problem and the objectives or questions to be answered by the study.
Research problem: A research problem is the situation that causes the researcher to feel
apprehensive, confused and ill at ease. It is the demarcation of a problem area within a certain
context involving the WHO or WHAT, the WHERE, the WHEN and the WHY of the problem
situation.
There are many problem situations that may give rise to research. Three sources usually
contribute to problem identification. One's own experience or the experience of others may supply
research problems. A second source could be scientific literature. You may read about certain
findings and notice that a certain field was not covered. This could lead to a research problem.
Theories could be a third source. Shortcomings in theories could be researched.
Types of questions to be asked: For more than 35 years, the news about newspapers and
young readers has been mostly bad for the newspaper industry. Long before any competition
from cable television or Nintendo, American newspaper publishers were worrying about declining
readership among the young.
As early as 1960, at least 20 years prior to Music Television (MTV) or the Internet, media
research scholars began to focus their studies on young adult readers' decreasing interest in
newspaper content. The concern over a declining youth market preceded and perhaps
foreshadowed today's fretting over market penetration. Even where circulation has grown or
stayed stable, there is rising concern over penetration, defined as the percentage of occupied
households in a geographic market that are served by a newspaper. Simply put, population
growth is occurring more rapidly than newspaper readership in most communities.
This study looks at trends in newspaper readership among the 18-to-34 age group and examines
some of the choices young adults make when reading newspapers.
One of the underlying concerns behind the decline in youth newspaper reading is the question of
how young people view the newspaper. A number of studies explored how young readers
evaluate and use newspaper content.
Comparing reader content preferences over a 10-year period, Gerald Stone and Timothy
Boudreau found differences between readers ages 18-34 and those 35-plus. Younger readers
showed increased interest in national news, weather, sports, and classified advertisements over
the decade between 1984 and 1994, while older readers ranked weather, editorials, and food
advertisements higher. Interest in international news and letters to the editor was less among
younger readers, while older readers showed less interest in reports of births, obituaries, and
marriages.
In an exploration of leisure reading among college students, Leo Jeffres and Atkin assessed
dimensions of interest in newspapers, magazines, and books, exploring the influence of media
use, non-media leisure, and academic major on newspaper content preferences. The study
discovered that overall newspaper readership was positively related to students' focus on
entertainment, job / travel information, and public affairs. However, the students' preference for
reading as a leisure-time activity was related only to a public affairs focus. Content preferences
for newspapers and other print media were related. The researchers found no significant
differences in readership among various academic majors, or by gender, though there was a
slight correlation between age and the public affairs readership index, with older readers more
interested in news about public affairs.
Methodology
Sample
Participants in this study (N=267) were students enrolled in 100- and 200-level English courses at
a midwestern public university. Courses that comprise the framework for this sample were
selected because they could fulfill basic studies requirements for all majors. A basic studies
course is one that is listed within the core curriculum required for all students. The researcher
obtained permission from seven professors to distribute questionnaires in the eight classes during
regularly scheduled class periods. The students' participation was voluntary; two students
declined. The goal of this sampling procedure was to reach a cross-section of students
representing various fields of study. In all, 53 majors were represented.
Of the 267 students who participated in the study, 65 (24.3 percent) were male and 177 (66.3
percent) were female. A total of 25 participants chose not to divulge their genders. Ages ranged
from 17 to 56, with a mean age of 23.6 years. This mean does not include the 32 respondents
who declined to give their ages. A total of 157 participants (58.8 percent) said they were of the
Caucasian race, 59 (22.1 percent) African American, 10 (3.8 percent) Asian, five (1.9 percent)
African/Native American, two (.8 percent) Hispanic, two (.8 percent) Native American, and one (.4
percent) Arabic. Most (214) of the students were enrolled full time, whereas a few (28) were part-
time students. The class rank breakdown was: freshmen, 45 (16.9 percent); sophomores, 15 (5.6
percent); juniors, 33 (12.4 percent); seniors, 133 (49.8 percent); and graduate students, 16 (6
percent).
Procedure
After two pre-tests and revisions, questionnaires were distributed and collected by the
investigator. In each of the eight classes, the researcher introduced herself to the students as a
journalism professor who was conducting a study on students' use of newspapers and other
media. Each questionnaire included a cover letter with the researcher's name, address, and
phone number. The researcher provided pencils and was available to answer questions if anyone
needed further assistance. The average time spent on the questionnaires was 20 minutes, with
some individual students taking as long as an hour. Approximately six students asked to take the
questionnaires home to finish. They returned the questionnaires to the researcher's mailbox
within a couple of days.
Assignment Set - 2
Ans.: There are some alternative methods of distributing questionnaires to the respondents.
They are:
1) Personal delivery,
2) Attaching the questionnaire to a product,
3) Advertising the questionnaire in a newspaper or magazine, and
4) News-stand inserts.
Personal delivery: The researcher or his assistant may deliver the questionnaires to the
potential respondents, with a request to complete them at their convenience. After a day or two,
the completed questionnaires can be collected from them. Often referred to as the self-
administered questionnaire method, it combines the advantages of the personal interview and the
mail survey. Alternatively, the questionnaires may be delivered in person and the respondents
may return the completed questionnaires through mail.
Attaching questionnaire to a product: A firm test marketing a product may attach a
questionnaire to a product and request the buyer to complete it and mail it back to the firm. A gift
or a discount coupon usually rewards the respondent.
Advertising the questionnaire: The questionnaire with the instructions for completion may be
advertised on a page of a magazine or in a section of newspapers. The potential respondent
completes it, tears it out and mails it to the advertiser. For example, the committee of Banks
Customer Services used this method for collecting information from the customers of commercial
banks in India. This method may be useful for large-scale studies on topics of common interest.
Newsstand inserts: This method involves inserting the covering letter, questionnaire and self
addressed reply-paid envelope into a random sample of newsstand copies of a newspaper or
magazine.
Advantages and Disadvantages:
The advantages of the panel method are:
• This method facilitates collection of more accurate data for longitudinal studies than any other
method, because under this method the event or action is reported soon after its occurrence.
• This method makes it possible to have before-and-after designs for field-based studies.
For example, the effect of public relations or advertising campaigns or welfare measures can be
measured by collecting data before, during and after the campaign.
• The panel method offers a good way of studying trends in events, behavior or attitudes. For
example, a panel enables a market researcher to study how brand preferences change from
month to month; it enables an economics researcher to study how employment, income and
expenditure of agricultural laborers change from month to month; a political scientist can study
the shifts in the inclinations of voters and the causative influential factors during an election. It is also
possible to find out how the constituency of the various economic and social strata of society
changes through time, and so on.
• A panel study also provides evidence on the causal relationship between variables. For
example, a cross-sectional study of employees may show an association between their attitude to
their jobs and their positions in the organization, but it does not indicate which comes first -
a favorable attitude or promotion. A panel study can provide data for finding an answer to this
question.
• It facilitates depth interviewing, because panel members become well acquainted with the field
workers and will be willing to allow probing interviews.
The method also has limitations. For example, cheating by panel members or investigators may be a
problem, especially once a panel has been in operation for some time.
Q 2. In processing data, what is the difference between measures of central tendency and
measures of dispersion? What is the most important measure of central tendency and
dispersion?
Ans.: Measures of central tendency (the mean, median and mode) summarize a data set by a
single central value around which the observations cluster; of these, the arithmetic mean is the
most widely used. Measures of dispersion, in contrast, describe how widely the observations
spread around that central value; the standard deviation is the most important such measure.
Measures of Dispersion: A measure of statistical dispersion is a real number that is zero if all
the data are identical, and increases as the data become more diverse. It cannot be less than
zero.
Most measures of dispersion have the same scale as the quantity being measured. In other
words, if the measurements have units, such as metres or seconds, the measure of dispersion
has the same units. Such measures of dispersion include:
• Standard deviation
• Interquartile range
• Range
• Mean difference
• Median absolute deviation
• Average absolute deviation (or simply called average deviation)
• Distance standard deviation
These are frequently used (together with scale factors) as estimators of scale parameters, in
which capacity they are called estimates of scale.
All the above measures of statistical dispersion have the useful property that they are location-
invariant, as well as linear in scale. So if a random variable X has a dispersion of SX then a linear
transformation Y = aX + b for real a and b should have dispersion SY = |a|SX.
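The property can be checked numerically. A sketch assuming Python with numpy; the distribution parameters are arbitrary:

# Sketch verifying the location-invariance / linearity-in-scale property
# above: for Y = a*X + b the standard deviation satisfies S_Y = |a| * S_X.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(50, 8, size=10_000)
a, b = -3.0, 7.0
y = a * x + b

print(f"S_X = {x.std():.3f}, |a|*S_X = {abs(a) * x.std():.3f}, "
      f"S_Y = {y.std():.3f}")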
Other measures of dispersion are dimensionless (scale-free). In other words, they have no
units even if the variable itself has units. These include:
• Coefficient of variation
• Quartile coefficient of dispersion
• Relative mean difference, equal to twice the Gini coefficient
There are other measures of dispersion:
• Variance (the square of the standard deviation) — location-invariant but not linear in
scale.
• Variance-to-mean ratio — mostly used for count data when the term coefficient of
dispersion is used and when this ratio is dimensionless, as count data are themselves
dimensionless; otherwise this is not scale-free.
Some measures of dispersion have specialized purposes, among them the Allan variance and
the Hadamard variance.
For categorical variables, it is less common to measure dispersion by a single number. See
qualitative variation. One measure that does so is the discrete entropy.
Sources of statistical dispersion
In the physical sciences, such variability may result only from random measurement errors:
instrument measurements are often not perfectly precise, i.e., reproducible. One may assume
that the quantity being measured is unchanging and stable, and that the variation between
measurements is due to observational error.
In the biological sciences, this assumption is false: the variation observed might be intrinsic to the
phenomenon; distinct members of a population differ greatly. This is also seen in the arena of
manufactured products; even there, the meticulous scientist finds variation. The simple model of a
stable quantity is preferred when it is tenable. Each phenomenon must be examined to see if it
warrants such a simplification.
Q 3. What are the characteristics of a good research design? Explain how the research
design for exploratory studies is different from the research design for descriptive and
diagnostic studies.
Ans.: Establishing a cause-effect relationship requires that the cause and effect covary, that the
cause precede the effect in time, and that plausible alternative explanations for the effect be ruled
out. In most social research the third condition is the most difficult to meet. Any number of factors
other than the treatment or program could cause changes in outcome measures. Campbell and
Stanley (1966) and later, Cook and Campbell (1979) list a number of common plausible
alternative explanations (or, threats to internal validity). For example, it may be that some
historical event which occurs at the same time that the program or treatment is instituted was
responsible for the change in the outcome measures; or, changes in record keeping or
measurement systems which occur at the same time as the program might be falsely attributed to
the program. The reader is referred to standard research methods texts for more detailed
discussions of threats to validity.
This paper is primarily heuristic in purpose. Standard social science methodology textbooks
(Cook and Campbell 1979; Judd and Kenny, 1981) typically present an array of research designs
and the alternative explanations, which these designs rule out or minimize. This tends to foster a
"cookbook" approach to research design - an emphasis on the selection of an available design
rather than on the construction of an appropriate research strategy. While standard designs may
sometimes fit real-life situations, it will often be necessary to "tailor" a research design to
minimize specific threats to validity. Furthermore, even if standard textbook designs are used, an
understanding of the logic of design construction in general will improve the comprehension of
these standard approaches. This paper takes a structural approach to research design. While this
is by no means the only strategy for constructing research designs, it helps to clarify some of the
basic principles of design logic.
Good research designs minimize the plausible alternative explanations for the hypothesized
cause-effect relationship. But such explanations may be ruled out or minimized in a number of
ways other than by design. The discussion, which follows, outlines five ways to minimize threats
to validity, one of which is by research design:
1. By Argument. The most straightforward way to rule out a potential threat to validity is to
simply argue that the threat in question is not a reasonable one. Such an argument may
be made either a priori or a posteriori, although the former will usually be more
convincing than the latter. For example, depending on the situation, one might argue that
an instrumentation threat is not likely because the same test is used for pre and post test
measurements and did not involve observers who might improve, or other such factors.
In most cases, ruling out a potential threat to validity by argument alone will be weaker
than the other approaches listed below. As a result, the most plausible threats in a study
should not, except in unusual cases, be ruled out by argument only.
2. By Measurement or Observation. In some cases it will be possible to rule out a threat
by measuring it and demonstrating that either it does not occur at all or occurs so
minimally as to not be a strong alternative explanation for the cause-effect relationship.
Consider, for example, a study of the effects of an advertising campaign on subsequent
sales of a particular product. In such a study, history (i.e., the occurrence of other events
which might lead to an increased desire to purchase the product) would be a plausible
alternative explanation. For example, a change in the local economy, the removal of a
competing product from the market, or similar events could cause an increase in product
sales. One might attempt to minimize such threats by measuring local economic
indicators and the availability and sales of competing products. If there is no change in
these measures coincident with the onset of the advertising campaign, these threats
would be considerably minimized. Similarly, if one is studying the effects of special
mathematics training on math achievement scores of children, it might be useful to
observe everyday classroom behavior in order to verify that students were not receiving
any additional math training to that provided in the study.
3. By Design. Here, the major emphasis is on ruling out alternative explanations by adding
treatment or control groups, waves of measurement, and the like. This topic will be
discussed in more detail below.
4. By Analysis. There are a number of ways to rule out alternative explanations using
statistical analysis. One interesting example is provided by Jurs and Glass (1971). They
suggest that one could study the plausibility of an attrition or mortality threat by
conducting a two-way analysis of variance. One factor in this study would be the original
treatment group designations (i.e., program vs. comparison group), while the other factor
would be attrition (i.e., dropout vs. non-dropout group). The dependent measure could be
the pretest or other available pre-program measures. A main effect on the attrition factor
would be indicative of a threat to external validity or generalizability, while an interaction
between group and attrition factors would point to a possible threat to internal validity.
Where both effects occur, it is reasonable to infer that there is a threat to both internal
and external validity. (A sketch of this analysis in code follows the list below.)
5. By Preventive Action. When potential threats are anticipated some type of preventive
action can often rule them out. For example, if the program is a desirable one, it is likely
that the comparison group would feel jealous or demoralized. Several actions can be
taken to minimize the effects of these attitudes including offering the program to the
comparison group upon completion of the study or using program and comparison
groups which have little opportunity for contact and communication. In addition, auditing
methods and quality control can be used to track potential experimental dropouts or to
insure the standardization of measurement.
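As referenced under point 4 above, here is a sketch of the Jurs and Glass attrition check as a two-way analysis of variance (Python with pandas and statsmodels; the twelve pretest scores and group labels are invented for illustration):

# Sketch of the attrition check described in point 4: a two-way ANOVA of
# pretest scores by treatment group and dropout status. Data are invented.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "pretest": [52, 55, 49, 61, 58, 47, 63, 50, 54, 59, 46, 62],
    "group":   ["program"] * 6 + ["comparison"] * 6,
    "dropout": ["yes", "no", "yes", "no", "no", "yes"] * 2,
})

model = ols("pretest ~ C(group) * C(dropout)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction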
The five categories listed above should not be considered mutually exclusive. The inclusion of
measurements designed to minimize threats to validity will obviously be related to the design
structure and is likely to be a factor in the analysis. A good research plan should, where possible,
make use of multiple methods for reducing threats. In general, reducing a particular threat by
design or preventive action will probably be stronger than by using one of the other three
approaches. The choice of which strategy to use for any particular threat is complex and depends
at least on the cost of the strategy and on the potential seriousness of the threat.
Design Construction
Basic Design Elements. Most research designs can be constructed from four basic elements:
1. Time. A causal relationship, by its very nature, implies that some time has elapsed
between the occurrence of the cause and the consequent effect. While for some
phenomena the elapsed time might be measured in microseconds and therefore might be
unnoticeable to a casual observer, we normally assume that the cause and effect in
social science arenas do not occur simultaneously. In design notation we indicate this
temporal element horizontally - whatever symbol is used to indicate the presumed cause
would be placed to the left of the symbol indicating measurement of the effect. Thus, as
we read from left to right in design notation we are reading across time. Complex designs
might involve a lengthy sequence of observations and programs or treatments across
time.
2. Program(s) or Treatment(s). The presumed cause may be a program or treatment
under the explicit control of the researcher or the occurrence of some natural event or
program not explicitly controlled. In design notation we usually depict a presumed cause
with the symbol "X". When multiple programs or treatments are being studied using the
same design, we can keep the programs distinct by using subscripts such as "X1" or "X2".
For a comparison group (i.e., one which does not receive the program under study) no
"X" is used.
3. Observation(s) or Measure(s). Measurements are typically depicted in design notation
with the symbol "O". If the same measurement or observation is taken at every point in
time in a design, then this "O" will be sufficient. Similarly, if the same set of measures is
given at every point in time in this study, the "O" can be used to depict the entire set of
measures. However, if different measures are given at different times it is useful to
subscript the "O" to indicate which measurement is being given at which point in time.
4. Groups or Individuals. The final design element consists of the intact groups or the
individuals who participate in various conditions. Typically, there will be one or more
program and comparison groups. In design notation, each group is indicated on a
separate line. Furthermore, the manner in which groups are assigned to the conditions
can be indicated by an appropriate symbol at the beginning of each line. Here, "R" will
represent a group which was randomly assigned, "N" will depict a group which was
nonrandomly assigned (i.e., a nonequivalent group or cohort), and "C" will indicate that
the group was assigned using a cutoff score on a measurement.
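Putting the four elements together, design notation can be rendered as plain text, one group per line with time running left to right. A small sketch in Python; the two designs shown are the standard randomized and nonequivalent-groups pretest-posttest designs:

# Sketch rendering the design notation described above as plain text:
# each group on its own line, time running left to right.
designs = {
    "Randomized pretest-posttest": ["R  O  X  O",
                                    "R  O     O"],
    "Nonequivalent groups":        ["N  O  X  O",
                                    "N  O     O"],
}

for name, rows in designs.items():
    print(name)
    for row in rows:
        print("   ", row)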
Q 4. How is the Case Study method useful in Business Research? Give two specific examples of
how the case study method can be applied to business research.
Ans.: While case study writing may seem easy at first glance, developing an effective case study
(also called a success story) is an art. Like other marketing communication skills, learning how to
write a case study takes time. What's more, writing case studies without careful planning usually
results in suboptimal results.
Savvy case study writers increase their chances of success by following these ten proven
techniques for writing an effective case study:
• Involve the customer throughout the process. Involving the customer throughout the case
study development process helps ensure customer cooperation and approval, and results in an
improved case study. Obtain customer permission before writing the document, solicit input
during the development, and secure approval after drafting the document.
• Write all customer quotes for their review. Rather than asking the customer to draft
their quotes, writing them for their review usually results in more compelling material.
Q 5. What are the differences between observation and interviewing as methods of data
collection? Give two specific examples of situations where either observation or interviewing
would be more appropriate.
Ans.: Observation means viewing or seeing. Observation may be defined as a systematic
viewing of a specific phenomenon in its proper setting for the specific purpose of gathering data
for a particular study. Observation is a classical method of scientific study.
• Observations must be made under conditions which permit accurate results. The
observer must be at a vantage point from which to see clearly the objects to be observed. The
distance and the light must be satisfactory. Any mechanical devices used must be in
good working condition and operated by skilled persons.
• The accuracy and completeness of the recorded results must be checked. A certain number
of cases can be observed again by another observer or another set of mechanical
devices, as the case may be. If feasible, two separate observers and sets of
instruments may be used in all or some of the original observations. The results can
then be compared to determine their accuracy and completeness.
Advantages of observation
o The main virtue of observation is its directness: it makes it possible to study
behavior as it occurs. The researcher need not ask people about their behavior
and interactions; he can simply watch what they do and say.
o Observation is more suitable for studying subjects who are unable to articulate
meaningfully, e.g. studies of children, tribal communities, animals, birds, etc.
o Observation improves the opportunities for analyzing the contextual background
of behavior. Furthermore, verbal reports can be validated and compared with
behavior through observation. The validity of what men of position and authority
say can be verified by observing what they actually do.
o Observation is less demanding of the subjects and has less of a biasing effect on
their conduct than questioning.
o Mechanical devices may be used for recording data in order to secure more
accurate data and also to make continuous observations over longer periods.
Interviews are a crucial part of the recruitment process for all organisations. Their purpose is to
give the interviewer(s) a chance to assess your suitability for the role and for you to demonstrate
your abilities and personality. As this is a two-way process, it is also a good opportunity for you to
ask questions and to make sure the organisation and position are right for you.
Interview format
Interviews take many different forms. It is a good idea to ask the organisation in advance what
format the interview will take.
The organisation determines the selection criteria based on the roles they are recruiting
for and then, in an interview, examines whether or not you have evidence of possessing
these.
Recruitment Manager, The Cooperative Group
• Technical interviews - If you have applied for a job or course that requires technical
knowledge, it is likely that you will be asked technical questions or have a separate
technical interview. Questions may focus on your final year project or on real or
hypothetical technical problems. You should be prepared to prove yourself, but also to
admit to what you do not know and stress that you are keen to learn. Do not worry if you
do not know the exact answer - interviewers are interested in your thought process and
logic.
• Academic interviews - These are used for further study or research positions.
Questions are likely to center on your academic history to date.
• Structured interviews - The interviewer has a set list of questions, and asks all the
candidates the same questions.
• Formal/informal interviews - Some interviews may be very formal, while others will feel
more like an informal chat about you and your interests. Be aware that you are still being
assessed, however informal the discussion may seem.
• Portfolio based interviews - If the role is within the arts, media or communications
industries, you may be asked to bring a portfolio of your work to the interview, and to
have an in-depth discussion about the pieces you have chosen to include.
• Senior/case study interviews - These range from straightforward scenario questions
(e.g. ‘What would you do in a situation where…?’) to the detailed analysis of a
hypothetical business problem. You will be evaluated on your analysis of the problem,
how you identify the key issues, how you pursue a particular line of thinking and whether
you can develop and present an appropriate framework for organising your thoughts.
Companies use screening tools to ensure that candidates meet minimum qualification
requirements. Computer programs are among the tools used to weed out unqualified candidates.
(This is why you need a digital resume that is screening-friendly. See our resume center for help.)
Sometimes human professionals are the gatekeepers. Screening interviewers often have honed
skills to determine whether there is anything that might disqualify you for the position. Remember:
they do not need to know whether you are the best fit for the position, only whether you are not
a match. For this reason, screeners tend to dig for dirt. Screeners will home in on gaps in your
employment history or pieces of information that look inconsistent. They will also want to know
from the outset whether you will be too expensive for the company.
Some tips for maintaining confidence during screening interviews:
On the opposite end of the stress spectrum from screening interviews is the informational
interview. A meeting that you initiate, the informational interview is underutilized by job-seekers
who might otherwise consider themselves savvy to the merits of networking. Job seekers
ostensibly secure informational meetings in order to seek the advice of someone in their current
or desired field, as well as to gain further references to people who can lend insight. Employers
that like to stay apprised of available talent even when they do not have current job openings are
often open to informational interviews, especially if they like to share their knowledge, feel
flattered by your interest, or esteem the mutual friend who connected you to them. During an
informational interview, the jobseeker and employer exchange information and get to know one
another better without reference to a specific job opening.
This takes off some of the performance pressure, but be intentional nonetheless:
• Come prepared with thoughtful questions about the field and the company.
• Gain references to other people and make sure that the interviewer would be comfortable
if you contact other people and use his or her name.
• Give the interviewer your card, contact information and resume.
• Write a thank you note to the interviewer.
In this style of interview, the interviewer has a clear agenda that he or she follows unflinchingly.
Sometimes companies use this rigid format to ensure parity between interviews; when
interviewers ask each candidate the same series of questions, they can more readily compare the
results. Directive interviewers rely upon their own questions and methods to tease from you what
they wish to know. You might feel like you are being steam-rolled, or you might find the
conversation develops naturally. Their style does not necessarily mean that they have dominance
issues, although you should keep an eye open for these if the interviewer would be your
supervisor.
The following strategies, which are helpful for any interview, are particularly important when
interviewers use a non-directive approach:
• Come to the interview prepared with highlights and anecdotes of your skills, qualities and
experiences. Do not rely on the interviewer to spark your memory-jot down some notes
that you can reference throughout the interview.
• Remain alert to the interviewer. Even if you feel like you can take the driver's seat and go
in any direction you wish, remain respectful of the interviewer's role. If he or she becomes
more directive during the interview, adjust.
• Ask well-placed questions. Although the open format allows you to significantly shape the
interview, running with your own agenda and dominating the conversation means that
you run the risk of missing important information about the company and its needs.
Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain
reader habits and interests. What type of research report would be most appropriate?
Develop an outline of the research report with the main sections.
Ans.: There are four major interlinking processes in the presentation of a literature review:
1. Critiquing rather than merely listing each item. A good literature review is led by your own
critical thought processes; it is not simply a catalogue of what has been written.
Once you have established which authors and ideas are linked, take each group in turn
and really think about what you want to achieve in presenting them this way. This is your
opportunity for showing that you did not take all your reading at face value, but that you
have the knowledge and skills to interpret the authors' meanings and intentions in relation
to each other, particularly if there are conflicting views or incompatible findings in a
particular area.
Rest assured that developing a sense of critical judgment in the literature surrounding a
topic is a gradual process of gaining familiarity with the concepts, language, terminology
and conventions in the field. In the early stages of your research you cannot be expected
to have a fully developed appreciation of the implications of all findings.
As you get used to reading at this level of intensity within your field, you will find it easier
and more purposeful to ask questions as you read.
2. Structuring the review. As you begin to group together the items you read, the direction of
your literature review will emerge with greater clarity. This is a good time to finalise your
concept map, grouping linked items, ideas and authors into firm categories as they relate more
obviously to your own study.
Now you can plan the structure of your written literature review, with your own intentions
and conceptual framework in mind. Knowing what you want to convey will help you
decide the most appropriate structure.
It is likely that your literature review will contain all of the following elements:
o An introduction
o A body
o A conclusion
The introduction sets the scene and lays out the various elements that are to be
explored.
The body takes each element in turn, usually as a series of headed sections and
subsections. The first paragraph or two of each section mentions the major authors in
association with their main ideas and areas of debate. The section then expands on
these ideas and authors, showing how each relates to the others, and how the debate
informs your understanding of the topic. A short conclusion at the end of each section
presents a synthesis of these linked ideas.
The final conclusion of the literature review ties together the main points from each of
your sections and this is then used to build the framework for your own study. Later,
when you come to write the discussion chapter of your thesis, you should be able to
relate your findings in one-to-one correspondence with many of the concepts or
questions that were firmed up in the conclusion of your literature review.
3. Controlling the 'voice' of your citations in the text (by selective use of direct quoting,
paraphrasing and summarizing)
You can treat published literature like any other data, but the difference is that it is not
data you generated yourself.
When you report on your own findings, you are likely to present the results with reference
to their source, for example:
o 'Positive responses were recorded for 80 per cent of the subjects (see table 2).'
o 'From the results shown in table 2, it appears that the majority of subjects
responded positively.'
In these examples your source of information is table 2. Had you found the same results
on page 17 of a text by Smith published in 1988, you would naturally substitute the name,
date and page number for 'table 2'. In each case it would be your voice introducing a fact
or statement that had been generated somewhere else.
You could see this process as building a wall: you select and place the 'bricks' and your
'voice' provides the ‘mortar’, which determines how strong the wall will be. In turn, this is
significant in the assessment of the merit and rigor of your work.
There are three ways to combine an idea and its source with your own voice:
o Direct quote
o Paraphrase
o Summary
In each method, the author's name and publication details must be associated with the
words in the text, using an approved referencing system. If you do not do this, you will be
in severe breach of academic convention, and might be penalized. Your field of study has
its own referencing conventions, which you should investigate before writing up your results.
Direct quoting repeats exact wording and thus directly represents the author:
o 'Rain is likely when the sky becomes overcast' (Smith 1988, page 27).
If the quotation is run in with your text, single quotation marks are used to enclose it, and
it must be an identical copy of the original in every respect.
Overuse or simple 'listing' of quotes can substantially weaken your own argument by
silencing your critical view or voice.
Paraphrasing is repeating an idea in your own words, with no loss of the author's
intended meaning:
o As Smith (1988) pointed out in the late eighties, rain may well be indicated by the
presence of cloud in the sky.
Paraphrasing allows you to organize the ideas expressed by the authors without being
rigidly constrained by the grammar, tense and vocabulary of the original. You retain a
degree of flexibility as to whose voice comes through most strongly.
Summarizing means to shorten or crystallize a detailed piece of writing by restating the
main points in your own words and in the order in which you found them. The original
writing is 'described' as if from the outside, and it is your own voice that is predominant:
o Referring to the possible effects of cloudy weather, Smith (1988) predicted the
likelihood of rain.
o Smith (1988) claims that some degree of precipitation could be expected as the
result of clouds in the sky: he has clearly discounted the findings of Jones (1986).
4. Using appropriate language
Your writing style represents you as a researcher, and reflects how you are dealing with
the subtleties and complexities inherent in the literature.
Once you have established a good structure with appropriate headings for your literature
review, and once you are confident in controlling the voice in your citations, you should
find that your writing becomes more lucid and fluent because you know what you want to
say and how to say it.
The good use of language depends on the quality of the thinking behind the writing, and
on the context of the writing. You need to conform to discipline-specific requirements.
However, there may still be some points of grammar and vocabulary you would like to
improve. If you have doubts about your confidence to use the English language well, you
can help yourself in several ways:
o Ask for feedback on your writing from friends, colleagues and academics
o Look for specific language information in reference materials
o Access programs or self-paced learning resources which may be available on
your campus
o Use the present perfect tense for recent events or actions that are still linked in an
unresolved way to the present: 'Several studies have attempted to...'
o Convey your meaning in the simplest possible way. Don't try to use an
intellectual tone for the sake of it, and do not rely on your reader to read your
mind!
o Keep sentences short and simple when you wish to emphasise a point.
o Use compound (joined simple) sentences to write about two or more ideas which
may be linked with 'and', 'but', 'because', 'whereas' etc.
o Use complex sentences when you are dealing with embedded ideas or those that
show the interaction of two or more complex elements.
o Verbs are more dynamic than nouns, and nouns carry information more densely
than verbs.
o Select active or passive verbs according to whether you are highlighting the
'doer' or the 'done to' of the action.
o Keep punctuation to a minimum. Use it to separate the elements of complex
sentences in order to keep subject, verb and object in clear view.
o Avoid densely packed strings of words, particularly nouns.
Introduction
I looked at the situation and found that I had a question to ask about it. I wanted to investigate
something in particular.
Review of literature
So I read everything I could find on the topic - what was already known and said and what had
previously been found. I established exactly where my investigation would fit into the big picture,
and began to realise at this stage how my study would be different from anything done previously.
Methodology
I decided on the number and description of my subjects, and with my research question clearly in
mind, designed my own investigation process, using certain known research methods (and
perhaps some that are not so common). I began with the broad decision about which research
paradigm I would work within (that is, qualitative/quantitative, critical/interpretive/ empiricist). Then
I devised my research instrument to get the best out of what I was investigating. I knew I would
have to analyse the raw data, so I made sure that the instrument and my proposed method(s) of
analysis were compatible right from the start. Then I carried out the research study and recorded
all the data in a methodical way according to my intended methods of analysis. As part of the
analysis, I reduced the data (by means of my preferred form of classification) to manageable
thematic representation (tables, graphs, categories, etc). It was then that I began to realise what I
had found.
Findings/results
What had I found? What did the tables/graphs/categories etc. have to say that could be pinned
down? It was easy enough for me to see the salient points at a glance from these records, but in
writing my report, I also spelled out what I had found truly significant to make sure my readers did
not miss it. For each display of results, I wrote a corresponding summary of important
observations, relating only to elements within my own set of results and comparing only like with like.
I was careful not to let my own interpretations intrude or voice my excitement just yet. I wanted to
state the facts - just the facts. I dealt correctly with all inferential statistical procedures, applying
tests of significance where appropriate to ensure both reliability and validity. I knew that I wanted
my results to be as watertight and squeaky clean as possible. They would carry a great deal more
credibility, strength and thereby academic 'clout' if I took no shortcuts and remained both rigorous
and scholarly.
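As an aside on the 'tests of significance' mentioned above, here is a minimal sketch (an assumed illustration; the group data and the choice of test are hypothetical, not from the source) of one common significance test, an independent-samples t-test comparing two groups of responses:

# Independent-samples t-test on two hypothetical groups of ratings.
from scipy import stats

group_a = [4, 5, 3, 5, 4, 4, 5]   # hypothetical program-group responses
group_b = [3, 2, 4, 3, 3, 2, 3]   # hypothetical comparison-group responses

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally p < 0.05) would support reporting the
# difference between the groups as statistically significant.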
Discussion
Now I was free to let the world know the significance of my findings. What did I find in the results
that answered my original research question? Why was I so sure I had some answers? What
about the unexplained or unexpected findings? Had I interpreted the results correctly? Could
there have been any other factors involved? Were my findings supported or contested by the
results of similar studies? Where did that leave mine in terms of contribution to my field? Can I
actually generalise from my findings in a breakthrough of some kind, or do I simply see myself as
reinforcing existing knowledge? And so what, after all? There were some obvious limitations to
my study, which, even so, I'll defend to the hilt. But I won't become over-apologetic about the
things left undone, or the abandoned analyses, the fascinating byways sadly left behind. I have
my memories...
Conclusion
We'll take a long hard look at this study from a broad perspective. How does it rate? How did I
end up answering the question I first thought of? The conclusion needs to be a few clear, succinct
sentences. That way, I'll know that I know what I'm talking about. I'll wrap up with whatever
generalizations I can make, and whatever implications have arisen in my mind as a result of
doing this thing at all. The more you find out, the more questions arise. How I wonder what you
are ... how I speculate. OK, so where do we all go from here?
1. Reading
2. Research design and implementation
3. Writing up the research report or thesis
Use an active, cyclical writing process: draft, check, reflect, revise, redraft.
Q 1. Give examples of specific situations that would call for the following types of research,
explaining why – a) Exploratory research b) Descriptive research c) Diagnostic research d)
Evaluation research. (10 marks)
Ans.: Research in common parlance refers to a search for knowledge. The Advanced Learner’s
Dictionary of Current English defines research as “careful investigation or inquiry through
search for new facts in any branch of knowledge”. Redman and Mory define research as
“systematized efforts to gain new knowledge”. Some people consider research a
movement, a movement from the known to the unknown. According to Clifford Woody, research
comprises defining and redefining problems, formulating hypotheses or suggested
solutions, collecting, organizing and evaluating data, making deductions and reaching
conclusions, and at last carefully testing the conclusions to determine whether they fit the
formulated hypotheses. In general, research refers to the systematic method consisting of
enunciating the problem, formulating a hypothesis, collecting the facts or data, analyzing
the facts and reaching certain conclusions, either in the form of solutions to the
concerned problem or in certain generalizations for some theoretical formulation.
Objectives: The main aim of research is to find out the truth which is hidden and has not been
discovered yet. The research objectives are:
• To gain familiarity with a phenomenon or to achieve new insights into it; studies with this
object in view are termed exploratory or formulative research studies.
• To portray accurately the characteristics of a particular individual, situation or group;
these are called descriptive research studies.
• To determine the frequency with which something occurs or with which it is associated
with something else; such a study is known as a diagnostic research study.
• To test a hypothesis of a causal relationship between variables; such studies are known as
hypothesis-testing research studies.
Motivation in Research: The possible motives for doing research may be either one or more of
the following:
• Desire to get a research degree along with its consequential benefits.
• Desire to face the challenge in solving the unsolved problems, i.e. concern over practical
problems initiates research.
• Desire to get intellectual joy of doing some creative work.
• Desire to be of service to society.
• Desire to get respectability.
a) Exploratory Research
It is also known as formulative research. It is a preliminary study of an unfamiliar
problem about which the researcher has little or no knowledge. It is ill-structured and
much less focused on pre-determined objectives. It usually takes the form of a pilot
study. The purpose of this research may be to generate new ideas, or to increase the
researcher’s familiarity with the problem or to make a precise formulation of the
problem or to gather information for clarifying concepts or to determine whether it is
feasible to attempt the study. Katz conceptualizes two levels of exploratory studies.
“At the first level is the discovery of the significant variable in the situations; at the
second, the discovery of relationships between variables.”
b) Descriptive Study
It is a fact-finding investigation with adequate interpretation. It is the simplest type of
research and is more specific than exploratory research. It aims at identifying the
various characteristics of a community, institution or problem under study, and also
at a classification of the range of elements comprising the subject matter of
study. It contributes to the development of a young science and is useful in verifying
focal concepts through empirical observation. It can highlight important
methodological aspects of data collection and interpretation. The information
obtained may be useful for prediction about areas of social life outside the
boundaries of the research. Descriptive studies are valuable in providing facts needed
for planning social action programmes.
c) Diagnostic Study
It is similar to a descriptive study but with a different focus. It is directed towards
discovering what is happening, why it is happening and what can be done about it. It
aims at identifying the causes of a problem and the possible solutions for it. It may
also be concerned with discovering and testing whether certain variables are
associated. This type of research requires prior knowledge of the problem, its
thorough formulation, a clear-cut definition of the given population, adequate methods
for collecting accurate information, precise measurement of variables, statistical
analysis and tests of significance.
d) Evaluation Studies
It is a type of applied research. It is made for assessing the effectiveness of social or
economic programmes implemented or for assessing the impact of developmental
projects on the development of the project area. It is thus directed to assess or
appraise the quality and quantity of an activity and its performance, and to specify its
attributes and conditions required for its success. It is concerned with causal
relationships and is more actively guided by hypothesis. It is concerned also with
change over time.
Q 2. In the context of hypothesis testing, briefly explain the difference between a) Null and
alternative hypothesis b) Type 1 and Type 2 error c) Two-tailed and one-tailed test d)
Parametric and non-parametric tests. (10 marks)
Q 3. Explain the difference between a causal relationship and correlation, with an example of
each. What are the possible reasons for a correlation between two variables? (10 marks)
Q 4. Briefly explain any two factors that affect the choice of a sampling technique. What are
the characteristics of a good sample? (10 marks)
Q 5. Select any topic for research and explain how you will use both secondary and primary
sources to gather the required information. (10 marks)
Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain
reader habits and interests. Develop a title for the study; define the research problem and the
objectives or questions to be answered by the study. (10 marks)
MB0050 Research Methodology - 4 Credits
Q 2. In processing data, what is the difference between measures of central tendency and
measures of dispersion? What is the most important measure of central tendency and
dispersion? (10 marks)
Q 3. What are the characteristics of a good research design? Explain how the research design
for exploratory studies is different from the research design for descriptive and diagnostic
studies. (10 marks)
Q 4. How is the Case Study method useful in Business Research? Give two specific examples
of how the case study method can be applied to business research. (10 marks)
Q 5. What are the differences between observation and interviewing as methods of data
collection? Give two specific examples of situations where either observation or interviewing
would be more appropriate. (10 marks)
Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading
Newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain
reader habits and interests. What type of research report would be most appropriate? Develop
an outline of the research report with the main sections. (10 marks)
Q.2 Define negotiable instrument. What are its features and characteristics? Which are
the different types of negotiable instruments? If Mr. A is the holder of a negotiable
instrument, under what situations
Indemnity: comprises only two parties - the indemnifier and the indemnity holder.
Guarantee: there are three parties, namely the surety, the principal debtor and the creditor.
Q.4. a. Mention the remedies for breach of contract. How will the injured party claim it?
[8 marks]
b. What is the difference between anticipatory and actual breach? [2 marks]
Ans.: Breach of Contract & Remedies:
Nature of breach
A breach of contract occurs where a party to a contract fails to perform, precisely and exactly, his
obligations under the contract. This can take various forms, for example the failure to supply
goods or perform a service as agreed. Breach of contract may be either actual or anticipatory.
Actual breach occurs where one party refuses to perform his side of the bargain on the due date
or performs incompletely. For example: Poussard v Spiers and Bettini v Gye.
Anticipatory breach occurs where one party announces, in advance of the due date for
performance, that he intends not to perform his side of the bargain. The innocent party may sue
for damages immediately the breach is announced. Hochster v De La Tour is an example.
Effects of breach
A breach of contract, no matter what form it may take, always entitles the innocent party to
maintain an action for damages, but the rule established by a long line of authorities is that the
right of a party to treat a contract as discharged arises only in three situations. The breaches
which give the innocent party the option of terminating the contract are:
(a) Renunciation
Renunciation occurs where a party refuses to perform his obligations under the contract. It may
be either express or implied. Hochster v De La Tour is a case law example of express
renunciation. Renunciation is implied where the reasonable inference from the defendant’s
conduct is that he no longer intends to perform his side of the contract. For example: Omnium
D’Enterprises v Sutherland.
(b) Breach of condition
The second repudiatory breach occurs where the party in default has committed a breach of
condition. Thus, for example, in Poussard v Spiers the employer had a right to terminate the
soprano’s employment when she failed to arrive for performances.
(c) Fundamental breach
The third repudiatory breach is where the party in breach has committed a serious (or
fundamental) breach of an innominate term or totally fails to perform the contract.
A repudiatory breach does not automatically bring the contract to an end. The innocent party has
two options: he may treat the contract as discharged and bring an action for damages for breach
of contract immediately, as occurred in, for example, Hochster v De La Tour; or he may elect to
treat the contract as still valid, complete his side of the bargain and then sue for payment by the
other side. For example: White and Carter Ltd v McGregor.
2. Introduction to remedies
Damages are the basic remedy available for a breach of contract. It is a common law remedy
that can be claimed as of right by the innocent party.
In order to recover substantial damages the innocent party must show that he has suffered
actual loss; if there is no actual loss he will only be entitled to nominal damages in recognition of
the fact that he has a valid cause of action. In making an award of damages, the court has two
major considerations:
Remoteness - for what consequences of the breach is the defendant legally responsible?
The measure of damages - the principles upon which the loss or damage is evaluated or
quantified in monetary terms. The second consideration is quite distinct from the first, and can
be decided by the court only after the first has been determined.
2. Remoteness of loss
The rule governing remoteness of loss in contract was established in Hadley v Baxendale. The
court established the principle that where one party is in breach of contract, the other should
receive damages which can fairly and reasonably be considered to arise naturally from the
breach of contract itself (‘in the normal course of things’), or which may reasonably be assumed
to have been within the contemplation of the parties at the time they made the contract as being
the probable result of a breach.
Thus, there are two types of loss for which damages may be recovered:
1. What arises naturally; and
2. What the parties could foresee when the contract was made as the likely result of breach.
As a consequence of the first limb of the rule in Hadley v Baxendale, the party in breach is
deemed to expect the normal consequences of the breach, whether he actually expected them
or not. Under the second limb of the rule, the party in breach can only be held liable for
abnormal consequences where he has actual knowledge that the abnormal consequences might
follow or where he reasonably ought to know that the abnormal consequences might follow -
Victoria Laundry v Newman Industries.
3. The measure (or quantum) of damages
In assessing the amount of damages payable, the courts use the following principles:
The amount of damages is to compensate the claimant for his loss, not to punish the defendant.
Damages are compensatory, not restitutionary. The most usual basis of compensatory damages
is to put the innocent party into the same financial position he would have been in had the
contract been properly performed. This is sometimes called the ‘expectation loss’ basis. In
Victoria Laundry v Newman Industries, for example, Victoria Laundry were claiming for the
profits they would have made had the boiler been installed on the contractually agreed date.
Sometimes a claimant may prefer to frame his claim in the alternative on the ‘reliance loss’ basis
and thereby recover expenses incurred in anticipation of performance and wasted as a result of
the breach - Anglia Television v Reed. In a contract for the sale of goods, the statutory (Sale of
Goods Act 1979) measure of damages is the difference between the market price at the date of
the breach and the contract price, so that only nominal damages will be awarded to a claimant
buyer or claimant seller if the price at the date of breach was respectively less or more than the
contract price. In fixing the amount of damages, the courts will usually deduct the tax (if any)
which would have been payable by the claimant if the contract had not been broken. Thus if
damages are awarded for loss of earnings, they will normally be by reference to net, not gross,
pay. Difficulty in assessing the amount of damages does not prevent the injured party from
receiving them: Chaplin v Hicks. In general, damages are not awarded for non-pecuniary loss
such as mental distress and loss of enjoyment. Exceptionally, however, damages are awarded
for such losses where the contract’s purpose is to promote happiness or enjoyment, as is the
situation with contracts for holidays - Jarvis v Swan Tours. The innocent party must take
reasonable steps to mitigate (minimise) his loss, for example, by trying to find an alternative
method of performance of the contract: Brace v Calder.
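To make the Sale of Goods Act measure concrete, here is a minimal sketch (with hypothetical prices; the helper function is illustrative, not statutory text) of the buyer's damages as the difference between the market price at the date of breach and the contract price:

# Statutory measure of damages for a non-delivering seller, as described above.
def buyer_damages(contract_price: float, market_price_at_breach: float) -> float:
    """Only a positive difference is a real loss; otherwise damages are nominal."""
    return max(0.0, market_price_at_breach - contract_price)

print(buyer_damages(contract_price=100.0, market_price_at_breach=120.0))  # 20.0 recoverable
print(buyer_damages(contract_price=100.0, market_price_at_breach=90.0))   # 0.0 (nominal damages only)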
4. Liquidated damages clauses and penalty clauses
If a contract includes a provision that, on a breach of contract, damages of a certain amount or
calculable at a certain rate will be payable, the courts will normally accept the relevant figure as
a measure of damages. Such clauses are called liquidated damages clauses.
The courts will uphold a liquidated damages clause even if that means that the injured party
receives less (or more, as the case may be) than his actual loss arising on the breach. This is
because the clause setting out the damages constitutes one of the agreed contractual terms -
Cellulose Acetate Silk Co Ltd v Widnes Foundry Ltd.
However, a court will ignore a figure for damages put in a contract if it is classed as a penalty
clause - that is, a sum which is not a genuine pre-estimate of the expected loss on breach.
This could be the case where
b. What is the difference between anticipatory and actual breach?
Ans.: Anticipatory Breach:
A seller and a buyer have entered into a contract. Prior to the start of the contract, the buyer
informs the seller that he no longer requires his goods. The seller writes back stating his
intention to store the goods until the contract expires and then sue for a breach of contract. The
buyer replies with an angry letter stating that he could just sell the goods to someone else.
Advise all parties.
Actual breach:
A breach of contract occurs where a party to a contract fails to perform, precisely and exactly,
his obligations under the contract, for example the failure to supply goods or perform a service
as agreed. Actual breach occurs where one party refuses to perform his side of the bargain on
the due date or performs incompletely. For example: Poussard v Spiers and Bettini v Gye.
Ans.: There are many types of businesses, and because of this, businesses are classified in
many ways. One of the most common classifications focuses on the primary profit-generating
activities of a business:
• Agriculture and mining businesses are concerned with the production of raw material,
such as plants or minerals.
• Financial businesses include banks and other companies that generate profit through
investment and management of capital.
• Information businesses generate profits primarily from the resale of intellectual property
and include movie studios, publishers and packaged software companies.
• Manufacturers produce products, from raw materials or component parts, which they then
sell at a profit. Companies that make physical goods, such as cars or pipes, are
considered manufacturers.
• Real estate businesses generate profit from the selling, renting, and development of
properties, homes, and buildings.
• Retailers and distributors act as middle-men in getting goods produced by manufacturers
to the intended consumer, generating a profit as a result of providing sales or distribution
services. Most consumer-oriented stores and catalogue companies are distributors or
retailers. See also: Franchising.
• Service businesses offer intangible goods or services and typically generate a profit by
charging for labor or other services provided to government, other businesses, or
consumers. Organizations ranging from house decorators to consulting firms,
restaurants, and even entertainers are types of service businesses.
• Transportation businesses deliver goods and individuals from location to location,
generating a profit on the transportation costs.
• Utilities produce public services, such as heat, electricity, or sewage treatment, and are
usually government chartered.
There are many other divisions and subdivisions of businesses. The authoritative list of business
types for North America is generally considered to be the North American Industry Classification
System, or NAICS. The equivalent European Union list is the Statistical Classification of
Economic Activities in the European Community (NACE).
Management
The efficient and effective operation of a business, and the study of this subject, is called
management. The main branches of management are financial management, marketing
management, human resource management, strategic management, production management,
operations management, service management and information technology management.
Reforming State Enterprises
In recent decades, assets and enterprises that were run by various states have been modeled
after business enterprises. In 2003, the People's Republic of China reformed 80% of its state-
owned enterprises and modeled them on a company-type management system.[2] Many state
institutions and enterprises in China and Russia have been transformed into joint-stock
companies, with part of their shares being listed on public stock markets.
Organization and government regulation
Most legal jurisdictions specify the forms of ownership that a business can take, creating a body
of commercial law for each type.
The major factors affecting how a business is organized are usually:
• The size and scope of the business firm and its structure, management, and ownership,
broadly analyzed in the theory of the firm. Generally a smaller business is more flexible,
while larger businesses, or those with wider ownership or more formal structures, will
usually tend to be organized as partnerships or (more commonly) corporations. In
addition, a business that wishes to raise money on a stock market or to be owned by a
wide range of people will often be required to adopt a specific legal form to do so.
• The sector and country. Private profit-making businesses are different from government-
owned bodies. In some countries, certain businesses are legally obliged to be organized
in certain ways.
• Limited liability. Corporations, limited liability partnerships, and other specific types of
business organizations protect their owners or shareholders from business failure by
doing business under a separate legal entity with certain legal protections. In contrast,
unincorporated businesses or persons working on their own are usually not so protected.
• Tax advantages. Different structures are treated differently in tax law, and may have
advantages for this reason.
• Disclosure and compliance requirements. Different business structures may be
required to make more or less information public (or reported to relevant authorities), and
may be bound to comply with different rules and regulations.
Many businesses are operated through a separate entity such as a corporation or a partnership
(either formed with or without limited liability). Most legal jurisdictions allow people to organize
such an entity by filing certain charter documents with the relevant Secretary of State or
equivalent and complying with certain other ongoing obligations. The relationships and legal
rights of shareholders, limited partners, or members are governed partly by the charter
documents and partly by the law of the jurisdiction where the entity is organized. Generally
speaking, shareholders in a corporation, limited partners in a limited partnership, and members
in a limited liability company are shielded from personal liability for the debts and obligations of
the entity, which is legally treated as a separate "person." This means that unless there is
misconduct, the owner's own possessions are strongly protected in law if the business does not
succeed.
Where two or more individuals own a business together but have failed to organize a more
specialized form of vehicle, they will be treated as a general partnership. The terms of a
partnership are partly governed by a partnership agreement, if one is created, and partly by the
law of the jurisdiction where the partnership is located. No paperwork or filing is necessary to
create a partnership, and without an agreement, the relationships and legal rights of the
partners will be entirely governed by the law of the jurisdiction where the partnership is located.
A single person who owns and runs a business is commonly known as a sole proprietor,
whether he or she owns it directly or through a formally organized entity.
A few relevant factors to consider in deciding how to operate a business include:
1. General partners in a partnership (other than a limited liability partnership), plus anyone
who personally owns and operates a business without creating a separate legal entity,
are personally liable for the debts and obligations of the business.
2. Generally, corporations are required to pay tax just like "real" people. In some tax
systems, this can give rise to so-called double taxation, because first the corporation
pays tax on the profit, and then when the corporation distributes its profits to its owners,
individuals have to include dividends in their income when they complete their personal
tax returns, at which point a second layer of income tax is imposed (see the numeric
sketch after this list).
3. In most countries, there are laws which treat small corporations differently than large
ones. They may be exempt from certain legal filing requirements or labor laws, have
simplified procedures in specialized areas, and have simplified, advantageous, or slightly
different tax treatment.
4.To "go public" (sometimes called IPO) -- which basically means to allow a part of the
business to be owned by a wider range of investors or the public in general—you must
organize a separate entity, which is usually required to comply with a tighter set of laws
and procedures. Most public entities are corporations that have sold shares, but
increasingly there are also public LLCs that sell units (sometimes also called shares), and
other more exotic entities as well (for example,RE I Ts in the USA, Unit Trusts in the UK).
However, you cannot take a general partnership "public."
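A minimal sketch (with hypothetical tax rates, chosen only for illustration) of the double taxation described in point 2 above, showing profit taxed once at the corporate level and again when distributed as a dividend:

# Two layers of tax on the same corporate profit, using assumed rates.
profit = 100_000.0
corporate_rate = 0.30   # assumed corporate income tax rate
dividend_rate = 0.20    # assumed personal tax rate on dividend income

after_corporate_tax = profit * (1 - corporate_rate)             # 70,000 left to distribute
net_to_shareholder = after_corporate_tax * (1 - dividend_rate)  # 56,000 received
effective_rate = 1 - net_to_shareholder / profit                # 0.44 combined burden
print(f"Effective combined tax rate: {effective_rate:.0%}")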
Types of meetings: Common types of meeting include:
1. Status meetings, generally leader-led, which are about reporting by one-way communication
2. Work meeting, which produces a product or intangible result such as a decision
3. Staff meeting, typically a meeting between a manager and those that report to the manager
4. Team meeting, a meeting among colleagues working on various aspects of a team project
5. Ad-hoc meeting, a meeting called for a special purpose
6. Management meeting, a meeting among managers
7. Board meeting, a meeting of the Board of directors of an organization
8. One-on-one meeting, between two individuals
9. Off-site meeting, also called "offsite retreat" and known as an Awayday meeting in the UK
10. Kickoff meeting, the first meeting with the project team and the client of the project to
discuss the role of each team member
11. Pre-bid meeting, a meeting of various competitors and/or contractors to visually inspect a
jobsite for a future project. The meeting is normally hosted by the future customer or the
engineer who wrote the project specification to ensure all bidders are aware of the details and
services expected of them. Attendance at the pre-bid meeting may be mandatory; failure to
attend usually results in a rejected bid.
Fall 2010
Master of Business Administration - MBA Semester 3
MB0035 – Legal Aspects of Business - 3 Credits
(Book ID: B0764)
Assignment Set- 2
60 Marks
Note: Each question carries 10 Marks. Answer all the questions.
Q.1 a. What is an arbitration agreement? Discuss its essentials. [8 marks]
b. What do you mean by mediation? [2 marks]
Ans.: Arbitration Agreement:
The foundation of arbitration is the arbitration agreement between the parties to submit to
arbitration all or certain disputes which have arisen or which may arise between them. Thus,
provision for arbitration can be made at the time of entering into the contract itself, so that if any
dispute arises in future, the dispute can be referred to an arbitrator as per the agreement. It is
also possible to refer a dispute to arbitration after the dispute has arisen. An arbitration
agreement may be in the form of an arbitration clause in a contract or in the form of a separate
agreement. The agreement must be in writing and must be signed by both parties. The
arbitration agreement can be by exchange of letters, documents, telex, telegram, etc.
Court must refer the matter to arbitration in some cases: If a party approaches the court despite
the arbitration agreement, the other party can raise an objection. However, such an objection
must be raised before submitting his first statement on the substance of the dispute, and the
original arbitration agreement or its certified copy must accompany the objection. On such an
application the judicial authority shall refer the parties to arbitration. Since the word used is
“shall”, it is mandatory for the judicial authority to refer the matter to arbitration. However, once
the opposite party has already made his first statement to the court, the matter has to continue
in the court. Once an application for referring the matter to arbitration has been made, the
arbitrator can continue with the arbitration and even make an arbitral award.
1. It must be in writing [Section 7(3)]: Like the old law, the new law also requires the arbitration
agreement to be in writing. It also provides in Section 7(4) that an exchange of letters, telex,
telegrams, or other means of telecommunication can also provide a record of such an
agreement. Further, it is also provided that an exchange of claim and defense in which the
existence of an arbitration agreement is alleged by one party and not denied by the other will
also amount to an arbitration agreement.
It is not necessary that the parties should sign such a written agreement. All that is necessary is
that the parties should accept the terms of an agreement reduced to writing. The naming of the
arbitrator in the arbitration agreement is not necessary. No particular form or formal document is
necessary.
2. It must have all the essential elements of a valid contract: An arbitration agreement stands on
the same footing as any other agreement. Every person capable of entering into a contract may
be a party to an arbitration agreement. The terms of the agreement must be definite and certain;
if the terms are vague it is bad for indefiniteness.
3. The agreement must be to refer a dispute, present or future, between the parties to
arbitration: If there is no dispute, there can be no right to demand arbitration. A dispute means
an assertion of a right by one party and repudiation thereof by another. A point as to which there
is no dispute cannot be referred to arbitration. The dispute may relate to an act of commission or
omission, for example, withholding a certificate to which a person is entitled or refusal to register
a transfer of shares.
Under the present law, certain disputes, such as matrimonial disputes, criminal prosecutions,
questions relating to guardianship, questions about the validity of a will, etc., are treated as not
suitable for arbitration. Section 2(3) of the new Act maintains this position. Subject to this
qualification, Section 7(1) of the new Act makes it permissible to enter into an arbitration
agreement “in respect of a defined legal relationship, whether contractual or not”.
4. An arbitration agreement may be in the form of an arbitration clause in a contract or in the
form of a separate agreement [Section 7(2)].
Appointment of Arbitrator: The parties can agree on a procedure for appointing the arbitrator or
arbitrators. If they are unable to agree, each party will appoint one arbitrator and the two
appointed arbitrators will appoint the third arbitrator, who will act as a presiding arbitrator
[Section 11(3)]. If one of the parties does not appoint an arbitrator within 30 days, or if the two
appointed arbitrators do not appoint the third arbitrator within 30 days, a party can request the
Chief Justice to appoint an arbitrator [Section 11(4)]. The Chief Justice can authorize any person
or institution to appoint an arbitrator. [Some High Courts have authorized the District Judge to
appoint an arbitrator.] In case of an international commercial dispute, the application for
appointment of an arbitrator has to be made to the Chief Justice of India. In case of other
domestic disputes, the application has to be made to the Chief Justice of the High Court within
whose jurisdiction the parties are situated [Section 11(12)].
Challenge to Appointment of Arbitrator: An arbitrator is expected to be independent and impartial. If there are some circumstances due to which his independence or impartiality can be challenged, he must disclose the circumstances before his appointment [Section 12(1)]. Appointment of an arbitrator can be challenged only if:
(a) circumstances exist that give rise to justifiable doubts as to his independence or impartiality, or
(b) he does not possess the qualifications agreed to by the parties [Section 12(3)].
Appointment of an arbitrator cannot be challenged on any other ground. The challenge to the appointment has to be decided by the arbitrator himself. If he does not accept the challenge, the proceedings can continue and the arbitrator can make the arbitral award. However, in such a case, an application for setting aside the arbitral award can be made to the Court. If the court agrees with the challenge, the arbitral award can be set aside [Section 13(6)]. Thus, even if the arbitrator does not accept the challenge to his appointment, the other party cannot stall further arbitration proceedings by rushing to court. The arbitration can continue and the challenge can be made in Court only after the arbitral award is made.
Conduct of Arbitral Proceedings: The Arbitral Tribunal should treat the parties equally and each party should be given full opportunity to present his case [Section 18]. The Arbitral Tribunal is not bound by the Code of Civil Procedure, 1908 or the Indian Evidence Act, 1872 [Section 19(1)]. The parties to arbitration are free to agree on the procedure to be followed by the Arbitral Tribunal. If the parties do not agree on the procedure, the procedure will be as determined by the Arbitral Tribunal.
Law of Limitation Applicable: The Limitation Act, 1963 is applicable. For this purpose, the date on which the aggrieved party requests the other party to refer the matter to arbitration shall be considered. If on that date the claim is barred under the Limitation Act, the arbitration cannot continue [Section 43(2)]. If the Court sets the arbitration award aside, the time spent in arbitration will be excluded for the purpose of the Limitation Act, so that a fresh case in court or fresh arbitration can be started.
Flexibility in respect of procedure, place and language: The Arbitral Tribunal has full powers to decide the procedure to be followed, unless the parties agree on the procedure to be followed [Section 19(3)]. The Tribunal also has powers to determine the admissibility, relevance, materiality and weight of any evidence [Section 19(4)]. The place of arbitration will be decided by mutual agreement. However, if the parties do not agree on the place, the same will be decided by the Tribunal [Section 20]. Similarly, the language to be used in the arbitral proceedings can be mutually agreed; otherwise, the Arbitral Tribunal can decide it [Section 22].
Submission of statement of claim and defense: The claimant should submit a statement of claims, points of issue and the relief or remedy sought. The respondent shall state his defense in respect of these particulars. All relevant documents must be submitted. Such claim or defense can be amended or supplemented at any time [Section 23].
Hearings and Written Proceedings: After submission of documents and defense, unless the parties agree otherwise, the Arbitral Tribunal can decide whether there will be an oral hearing or whether proceedings can be conducted on the basis of documents and other materials. However, if one of the parties so requests, the hearing shall be oral. Sufficient advance notice of hearing should be given to both the parties [Section 24]. [Thus, unless one party requests it, an oral hearing is not compulsory.]
Settlement during Arbitration: It is permissible for the parties to arrive at a mutual settlement even while the arbitration is proceeding. In fact, even the Tribunal can make efforts to encourage mutual settlement. If the parties settle the dispute by mutual agreement, the arbitration shall be terminated. However, if both the parties and the Arbitral Tribunal agree, the settlement can be recorded in the form of an arbitral award on agreed terms. Such an arbitral award shall have the same force as any other arbitral award [Section 30].
Arbitral Award: The decision of the Arbitral Tribunal is termed an 'Arbitral Award'. The arbitrator can decide the dispute ex aequo et bono (in justice and in good faith) if both the parties expressly authorize him to do so [Section 28(2)]. The decision of the Arbitral Tribunal will be by majority [Section 29]. The award must be in writing and signed by the members of the Arbitral Tribunal [Section 31(1)]. It must state the reasons for the award unless the parties have agreed that no reason for the award is to be given [Section 31(3)]. The award should be dated and the place where it is made should be mentioned. A copy of the award should be given to each party. The Tribunal can make an interim award also [Section 31(6)].
Cost of Arbitration: Cost of arbitration means the reasonable cost relating to fees and expenses of arbitrators and witnesses, legal fees and expenses, administration fees of the institution supervising the arbitration, and other expenses in connection with the arbitral proceedings. The Tribunal can decide the cost and the share of each party [Section 31(8)]. If the parties refuse to pay the costs, the Arbitral Tribunal may refuse to deliver its award. In such a case, any party can approach the Court. The Court will ask for a deposit from the parties and, on such deposit, the Tribunal will deliver the award. The Court will then decide the costs of arbitration and pay the same to the arbitrators; the balance, if any, will be refunded to the party [Section 39].
Intervention by Court: One of the major defects of the earlier arbitration law was that a party could approach the court at almost every stage of arbitration, right from appointment of the arbitrator to implementation of the final award. Thus, the defending party could approach the court at various stages and stall the proceedings. Now, approach to court has been drastically curtailed. In some cases, if a party raises an objection, the Arbitral Tribunal itself can give the decision on that objection. After the decision, the arbitration proceedings are continued and the aggrieved party can approach the Court only after the Arbitral Award is made. Appeal to court is now possible only on restricted grounds. Of course, the Tribunal cannot be given unlimited and uncontrolled powers, and supervision of Courts cannot be totally eliminated.
Arbitration Act has Over-Riding Effect: Section 5 of the Act clarifies that, notwithstanding anything contained in any other law for the time being in force, in matters governed by the Act the judicial authority can intervene only as provided in this Act and not under any other Act.
Modes of Arbitration
(a) Arbitration without the intervention of the court. [Sec.3 to 19]
(b) Arbitration with the intervention of the court when there is no suit pending [Sec.20]
(c) Arbitration with the intervention of the court where a suit is pending. [Sec.21 to 25]
b. What do you mean by mediation?
Ans.: Meditation is a holistic discipline during which the practitioner trains his or her mind in order to realize some benefit.
Meditation is generally an internal, personal practice and most often done without any external involvement, except perhaps prayer beads to count prayers. Meditation often involves invoking or cultivating a feeling or internal state, such as compassion, or attending to a specific focal point. The term can refer to the state itself, as well as to practices or techniques employed to cultivate the state.
There are hundreds of specific types of meditation. The word 'meditation' means many things depending upon the context of its use. People practice meditation for many reasons within the context of their social environment. Meditation is a component of many religions, and has been practiced since antiquity, particularly by monastics. A 2007 study by the U.S. government found that nearly 9.4% of U.S. adults (over 20 million) had used meditation within the past 12 months, up from 7.6% (more than 15 million people) in 2002. To date, the exact mechanism at work in meditation remains unclear, while scientific research continues.
Ans.: Consumer right is defined as 'the right to be informed about the quality, quantity, potency, purity, standard and price of goods or services, as the case may be, so as to protect the consumer against unfair trade practices'. Even though strong and clear laws exist in India to protect consumer rights, the actual plight of Indian consumers is dismal. Very few consumers are aware of their rights or understand their basic consumer rights. Of the several laws that have been enacted to protect the rights of consumers in India, the most significant is the Consumer Protection Act, 1986. Under this law, everyone, including individuals, a Hindu undivided family, a firm, and a company, can exercise their consumer rights for the goods and services purchased by them. It is important that, as consumers, we know at least our basic rights and about the courts and procedures that deal with the infringement of our rights.
In general, the rights of consumers in India can be listed as under:
* The right to be protected from all types of hazardous goods and services
* The right to be fully informed about the performance and quality of all goods and services
* The right to free choice of goods and services
* The right to be heard in all decision-making processes related to consumer interests
* The right to seek redressal, whenever consumer rights have been infringed
* The right to complete consumer education
The Consumer Protection Act, 1986 and various other laws like the Standards of Weights & Measures Act have been formulated to ensure fair competition in the market place and the free flow of true information from the providers of goods and services to those who consume them. However, the success of these laws depends upon the vigilance of consumers about their rights, as well as their responsibilities. In fact, the level of consumer protection in a country is considered a true indicator of the extent of progress of the nation.
The production and distribution systems have become larger and more complicated today. The high level of sophistication achieved by the providers of goods and services in their selling and marketing practices, and various types of promotional activities like advertising, have resulted in an increased need for higher consumer awareness and protection. In India, the government has realized the plight of Indian consumers, and the Ministry of Consumer Affairs, Food and Public Distribution has established the Department of Consumer Affairs as the nodal organization for the protection of consumer rights, redressal of all consumer grievances and promotion of standards governing goods and services offered in India.
A complaint for infringement of consumer rights could be made, under the following circumstances, in the nearest designated consumer court:
* The goods or services bought by a person, or agreed to be bought by a person, suffer from one or more deficiencies or defects in any respect
* A trader or a service provider resorting to restrictive or unfair trade practices
* A trader or a service provider charging a price in excess of the price displayed on the goods, or the price that had been agreed upon between the parties, or the price that had been stipulated under any law in force
* Goods or services that pose a hazard to the safety and life of a person offered for sale, knowingly or unknowingly, causing injury to health, safety or life.
Q.2 a. What kinds of rights are considerable under consumer rights? [5 marks]
b. Distinguish between Memorandum of Association and Articles of Association. [6 marks]
Q.3 a. Identify the types of evidence which are relied upon by complainants to establish
defect in product. [3 marks]
b. Write a short note on unfair trade practices and Restrictive trade practice. [7 marks]
3. Write a short note on unfair trade practices and Restrictive trade practice.
Ans.: Unfair trade practices:
The law of unfair competition serves five purposes. First, the law seeks to protect the economic, intellectual, and creative investments made by businesses in distinguishing themselves and their products. Second, the law seeks to preserve the good will that businesses have established with consumers. Third, the law seeks to deter businesses from appropriating the good will of their competitors. Fourth, the law seeks to promote clarity and stability by encouraging consumers to rely on a merchant's good will and reputation when evaluating the quality of rival products. Fifth, the law seeks to increase competition by providing businesses with incentives to offer better goods and services than others in the same field.
Although the law of unfair competition helps protect consumers from injuries caused by deceptive trade practices, the remedies provided to redress such injuries are available only to business entities and proprietors. Consumers who are injured by deceptive trade practices must avail themselves of the remedies provided by state and federal Consumer Protection laws. In general, businesses and proprietors injured by unfair competition have two remedies: injunctive relief (a court order restraining a competitor from engaging in a particular fraudulent or deceptive practice) and money damages (compensation for any losses suffered by an injured business).
General Principles
The freedom to pursue a livelihood, operate a business, and otherwise compete in the marketplace is essential to any free enterprise system. Competition creates incentives for businesses to earn customer loyalty by offering quality goods at reasonable prices. At the same time, competition can also inflict harm. The freedom to compete gives businesses the right to lure customers away from each other. When one business entices enough customers away from competitors, those rival businesses may be forced to shut down or move.
The law of unfair competition will not penalize a business merely for being successful in the marketplace. Nor will the law impose liability simply because a business is aggressively marketing its product. The law assumes, however, that for every dollar earned by one business, a competitor will lose a dollar. Accordingly, the law prohibits a business from unfairly profiting at a competitor's expense. What constitutes unfair competition varies according to the Cause of Action asserted in each case. These include actions for the infringement of Patents, Trademarks, and copyrights; actions for the wrongful appropriation of Trade Dress, trade names, trade secrets, and service marks; and actions for the publication of defamatory, false, and misleading representations.
Restrictive trade practice:
The restrictive trade practices, or antitrust, provisions in the Trade Practices Act are aimed at deterring practices by firms which are anti-competitive in that they restrict free competition. This part of the Act is enforced by the Australian Competition and Consumer Commission (ACCC). The ACCC can litigate in the Federal Court of Australia, and seek pecuniary penalties of up to $10 million from corporations and $500,000 from individuals. Private actions for compensation may also be available.
These provisions prohibit:
• Most price agreements (see Cartel and Price-Fixing)
• Primary boycotts (an agreement between parties to exclude another)
• Secondary boycotts whose purpose is to cause a substantial lessening of competition (actions between two persons engaging in conduct hindering a third person from supplying or acquiring goods or services from a fourth)
• Misuse of market power – taking advantage of substantial market power in a particular market, for one or more proscribed purposes; namely, to eliminate or damage an actual or potential competitor, to prevent a person from entering a market, or to deter or prevent a person from engaging in competitive conduct
• Exclusive dealing – an attempt to interfere with the freedom of buyers to buy from other suppliers, such as agreeing to supply a product only if a retailer does not stock a competitor’s product. Most forms of exclusive dealing are prohibited only if they have the purpose or likely effect of substantially lessening competition in a market
• Third-line forcing – a type of exclusive dealing, third-line forcing involves the supply of goods or services on the condition that the acquirer also acquires goods or services from a third party. Third-line forcing is prohibited per se
• Resale price maintenance – fixing a price below which resellers cannot sell or advertise
• Mergers and acquisitions that would result in a substantial lessening of competition
A priority of ACCC enforcement action in recent years has been cartels. The ACCC has in place an immunity policy, which grants immunity from prosecution to the first party in a cartel to provide information to the ACCC allowing it to prosecute. This policy recognizes the difficulty in gaining information and evidence about price-fixing behaviours.
Q.4. Present a detail note on Shops and Establishment Act. [10 marks]
Ans.: Shops and Establishment Act:
Objectives
- To provide statutory obligation and rights to employees and employers in the unorganized
sector of employment, i.e., shops and establishments.
Scope And Coverage
- A state legislation; each state has framed its own rules for the Act.
- Applicable to all persons employed in an establishment, with or without wages, except the members of the employer's family.
- State government can exempt, either permanently or for a specified period, any establishment from all or any provisions of this Act.
Main Provisions
- Compulsory registration of shop/establishment within thirty days of commencement of work.
- Communication of closure of the establishment within 15 days from the closing of the establishment.
- Lays down the hours of work per day and week.
- Lays down guidelines for spread-over, rest interval, opening and closing hours, closed days,
national and religious holidays, overtime work.
- Rules for employment of children, young persons and women
- Rules for annual leave, maternity leave, sickness and casual leave, etc.
- Rules for employment and termination of service.
- Maintenance of registers and records and display of notices.
- Obligations of employers.
- Obligations of employees.
About What:
1. To regulate conditions of work and employment in shops, commercial establishments, residential hotels, restaurants, eating houses, theatres, other places of public entertainment and other establishments.
2. Provisions include Regulation of Establishments, Employment of Children, Young Persons and Women, Leave and Payment of Wages, Health and Safety, etc.
Applicability & Coverage:
1. It applies to all local areas specified in Schedule-I
2. Establishment means any establishment to which the Act applies and any other such establishment to which the State Government may extend the provisions of the Act by notification.
3. Employee means a person wholly or principally employed, whether directly or through any agency, and whether for wages or other consideration, in connection with any establishment.
4. Member of the family of an employer means the husband, wife, son, daughter, father, mother, brother or sister who is dependent on such employer.
Returns:
1. Form-A or Form-B (as the case may be) {Section 7(2)(a), Rule 5}
Before 15th December of the calendar year, i.e. 15 days before the expiry date.
The employer has to submit these forms to the notified authority along with the old certificate of registration and the renewal fees for a minimum of one year's renewal and a maximum of three years' renewal.
2. Form-E (Notice of Change) {Rule 8}
Within 15 days after the expiry of the quarter to which the changes relate, in respect of the total number of employees qualifying for higher fees as prescribed in Schedule-II; and in respect of other changes in the original statement furnished, within 30 days after the change has taken place. (Quarter means the quarter ending on 31st March, 30th June, 30th September and 31st December.)
Registers:
1. Form-A {Rule 5}
Register showing dates of lime washing etc.
2. Form-H, Form-J {Rule 20(1)} (if opening & closing hours are ordinarily uniform)
Register of Employment in a Shop or Commercial Establishment
3. Form-I {Rule 20(3)}, Form-K (if opening & closing hours are ordinarily uniform)
Register of Employment in a Residential Hotel, Restaurant, Eating-House, Theatre, or other place of public amusement or entertainment
4. Form-M {Rule 20(4)}
Register of Leave – this and all the above registers have to be maintained by the employer
5. Visit Book
This shall be a bound book of size 7” x 6” containing at least 100 pages, with every second page consecutively numbered, to be produced to the visiting Inspector on demand. The columns shall be:
i. Name of the establishment or Employer
ii. Locality
iii. Registration Number
iv. Date and
v. Time
Q. 5 a. What is a cyber crime? What are the categories of cyber crime? [8 marks]
b. Mention the provisions covered under IT Act? [2 marks]
Ans.: Cyber crime
It refers to all the activities done with criminal intent in cyberspace or using the medium of the Internet. These could be either criminal activities in the conventional sense or activities newly evolved with the growth of the new medium. Any activity which basically offends human sensibilities can be included in the ambit of cyber crimes.
Because of the anonymous nature of the Internet, it is possible to engage in a variety of criminal activities with impunity, and people with intelligence have been grossly misusing this aspect of the Internet to commit criminal activities in cyberspace. The field of cyber crime is just emerging, and new forms of criminal activities in cyberspace are coming to the forefront each day. For example, child pornography on the Internet constitutes one serious cyber crime. Similarly, the activities of online pedophiles, who use the Internet to induce minor children into sex, are as much cyber crimes as any others.
Categories of cyber crimes:
Cyber crimes can be basically divided into three major categories:
1. Cyber crimes against persons;
2. Cyber crimes against property; and
3. Cyber crimes against government.
1. Cyber crimes against persons: Cyber crimes committed against persons include various crimes like transmission of child pornography, harassment of anyone with the use of a computer, and cyber stalking. The trafficking, distribution, posting, and dissemination of obscene material, including pornography, indecent exposure, and child pornography, constitute the most important cyber crimes known today. These threaten to undermine the growth of the younger generation and also leave irreparable scars on the minds of the younger generation, if not controlled.
Similarly, cyber harassment is a distinct cyber crime. Various kinds of harassment can and do occur in cyberspace, or through the use of cyberspace. Harassment can be sexual, racial, religious, or of any other nature. Cyber harassment as a crime also brings us to another related area: violation of the privacy of citizens. Violation of the privacy of online citizens is a cyber crime of a grave nature.
Cyber stalking: The Internet is a wonderful place to work, play and study. The net is merely a mirror of the real world, and that means it also contains electronic versions of real-life problems. Stalking and harassment are problems that many persons, especially women, are familiar with in real life. These problems also occur on the Internet, in the form of “cyber stalking” or “online harassment”.
2. Cyber crimes against property: The second category of cyber crimes is cyber crimes against all forms of property. These crimes include unauthorized computer trespassing through cyberspace, computer vandalism, transmission of harmful programs and unauthorized possession of computerized information.
3. Cyber crimes against Government: The third category of cyber crimes is cyber crimes against Government. Cyber terrorism is one distinct kind of crime in this category. The growth of the Internet has shown that individuals and groups are using the medium of cyberspace to threaten international governments and to terrorize the citizens of a country. This crime manifests itself into cyber terrorism when an individual “cracks” into a government- or military-maintained website for the purpose of perpetuating terror.
Since cyber crime is a newly emerging field, a great deal of development has to take place in terms of putting into place the relevant legal mechanism for controlling and preventing cyber crime. The courts in the United States of America have already begun taking cognizance of various kinds of fraud and cyber crimes being perpetrated in cyberspace. However, much work has to be done in this field. Just as the human mind is ingenious enough to devise new ways for perpetrating crime, similarly, human ingenuity needs to be canalized into developing effective legal and regulatory mechanisms to control and prevent cyber crimes. A criminal mind can assume very powerful manifestations if it is used on a network, given the reachability and size of the network.
Legal recognition granted to Electronic Records and Digital Signatures would certainly boost E-Commerce in the country. It will help in the conclusion of contracts and the creation of rights and obligations through the electronic medium. In order to guard against misuse and fraudulent activities over the electronic medium, punitive measures are provided in the Act. The Act has recognized certain offences, which are punishable. They are:
Tampering with computer source documents (Sec 65):
Any person who knowingly or intentionally conceals, destroys or alters, or intentionally or knowingly causes another person to conceal, destroy or alter, any
i. computer source code, when the computer source code is required to be kept by law for the time being in force,
ii. computer programme,
iii. computer system, or
iv. computer network,
is punishable with imprisonment up to three years, or with fine which may extend up to two lakh rupees, or with both.
Hacking with computer system (Sec 66):
Hacking with a computer system is a punishable offence under the Act. Any person who intentionally or knowingly causes wrongful loss or damage to the public, or destroys, deletes or alters any information residing in a computer resource, or diminishes its value or utility, or affects it injuriously by any means, commits hacking. Such offences are punishable with three years' imprisonment, or with fine of two lakh rupees, or with both.
Publishing of information which is obscene in electronic form (Sec 67): Whoever publishes or transmits, or causes to be published in the electronic form, any material which is lascivious or appeals to the prurient interest, or if its effect is such as to tend to deprave and corrupt persons who are likely, having regard to all relevant circumstances, to read, see or hear the matter contained or embodied in it, shall be punished on first conviction with imprisonment for a term extending up to 5 years and with fine which may extend to one lakh rupees. In case of second and subsequent conviction, imprisonment may extend to ten years, also with fine which may extend up to two lakh rupees.
Failure to comply with orders of the Controller by a Certifying Authority or any employee of such authority (Sec 68):
Failure to comply with orders of the Controller by any Certifying Authority or by any employee of a Certifying Authority is a punishable offence. Such persons are liable to imprisonment for a term not exceeding three years, or to a fine not exceeding two lakh rupees, or to both.
Failure to assist any agency of the Government to decrypt information (Sec 69):
Failure by any subscriber or any person in charge of a computer to assist or to extend any facilities and technical assistance to any Government agency to decrypt information, on the orders of the Controller, in the interest of the sovereignty and integrity of India etc., is a punishable offence under the Act. Such persons are liable to imprisonment for a term which may extend to seven years.
Unauthorized access to a protected system (Sec 70):
Any person who secures access, or attempts to secure access, to a protected system in contravention of the provisions is punishable with imprisonment for a term which may extend to ten years, and is also liable to fine.
Misrepresentation before authorities (Sec 71):
Any person who obtains a Digital Signature Certificate by misrepresentation or by suppressing any material fact from the Controller or the Certifying Authority, as the case may be, is punishable with imprisonment for a term which may extend to two years, or with fine up to one lakh rupees, or with both.
Breach of confidentiality and privacy (Sec 72):
Any person who, in pursuance of the powers conferred under the Act, has secured access to any electronic record, book, register, correspondence, information, document or other material, and discloses such material to any other person without the consent of the person concerned, shall be punished with imprisonment for a term which may extend to two years, or with fine up to one lakh rupees, or with both.
Publishing false particulars in a Digital Signature Certificate (Sec 73):
No person can publish a Digital Signature Certificate, or otherwise make it available to any other person, with the knowledge that:
a. the Certifying Authority listed in the certificate has not issued it; or
b. the subscriber listed in the certificate has not accepted it; or
c. the certificate has been revoked or suspended,
unless such publication is for the purpose of verifying a digital signature created prior to such suspension or revocation. Any person who contravenes these provisions shall be punishable with imprisonment for a term which may extend to two years, or with fine up to rupees one lakh, or with both.
b. Mention the provisions covered under IT Act?
Ans.: IT Act:
Publication of Digital Signature Certificate for fraudulent purpose (Sec 74):
Any person who knowingly creates, publishes or otherwise makes available a Digital Signature Certificate for any fraudulent or unlawful purpose shall be punished with imprisonment for a term which may extend to two years, or with fine up to one lakh rupees, or with both.
Search and Arrest
Any police officer not below the rank of a Deputy Superintendent of Police, or any other officer of the Central Government or a State Government authorized in this behalf, may enter any public place, and search and arrest without warrant any person found therein who is reasonably suspected of having committed, or of committing, or of being about to commit, any offence under this Act.
Ans.: Redressal forum: Twenty-five years ago, consumer action in India was virtually unheard of. It consisted of some action by individuals, usually addressing their own grievances. Even this was greatly limited by the resources available to these individuals. There was little organized effort or attempt to take up wider issues that affected classes of consumers or the general public. All this changed in the Eighties with the Supreme Court-led concept of public interest litigation. It gave individuals and the newly formed consumer groups access to the law, and introduced into their work the broad public interest perspective.
Several important legislative changes took place during this period. Significant were the amendments to the Monopolies and Restrictive Trade Practices Act (hereafter "MRTP Act") and the Essential Commodities Act, and the introduction of the Environment Protection Act and the Consumer Protection Act. These changes shifted the focus of law from merely regulating the private and public sectors to actively protecting consumer interests.
The Consumer Protection Act, 1986 (hereafter "the Act") is a remarkable piece of legislation for its focus and clear objective, its minimal technical and legalistic procedures, the access it provides to redressal systems, and the composition of its courts with a majority of members from non-legal backgrounds.
The Act establishes a hierarchy of courts, with at least one District Forum at the district level (Chennai has two), a State Commission at each State capital and the National Commission at New Delhi. The pecuniary jurisdiction of the District Forum is up to Rs. one lakh, and that of the State Commission is above Rs. one lakh and below Rs. 10 lakhs. All claims involving more than Rs. 10 lakhs are filed directly before the National Commission. Appeals from the District Forum are to be filed before the State Commission, and from there to the National Commission, within thirty days of knowledge of the order.
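To make these monetary limits concrete, here is a minimal sketch in Python (purely illustrative; the function name and the handling of a claim of exactly Rs. 10 lakhs are assumptions, since the text above leaves that boundary unspecified) of how a claim value maps to a forum:

```python
# Illustrative only: route a complaint to a consumer court using the
# pecuniary limits quoted above (Rs. 1 lakh and Rs. 10 lakhs).
LAKH = 100_000  # one lakh rupees

def redressal_forum(claim_rupees: int) -> str:
    """Return the forum before which a claim of this value is filed."""
    if claim_rupees <= 1 * LAKH:      # "up to Rs. one lakh"
        return "District Forum"
    if claim_rupees < 10 * LAKH:      # "above Rs. one lakh and below Rs. 10 lakhs"
        return "State Commission"
    return "National Commission"      # "more than Rs. 10 lakhs"

for claim in (50_000, 5 * LAKH, 25 * LAKH):
    print(f"Claim of Rs. {claim:,} -> {redressal_forum(claim)}")
```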
How to make a complaint
This section explains how to make a complaint using our Complaints Registration Form. It tells
you what information you need to include on the form, and where you need to send your
completed form.
Definition of a complaint
The UK Border Agency defines a complaint as “any expression of dissatisfaction about the
services provided by or for the UK Border Agency and/or about the professional conduct of UK
Border Agency staff, including contractors.”
The following will not be treated as complaints:
• Letters relating to the decision to refuse a UK visa. Visa applicants are expected to raise
this using the existing appeal channels.
• Letters chasing progress on an application, unless it is outside our published processing times.
What information should you send?
You should make your complaint using our Complaints Registration Form.
It is important that you give as much information about yourself as possible. The Complaints
Registration Form tells you the type of information we need. This will help us to find the
information relevant to your case and to contact you about it. If possible you should also
include:
• Full details about the complaint (including times, dates and locations);
• The names of any UK Border Agency / Visa Application Centre staff you have dealt with;
• Details of any witnesses to the incident (if appropriate);
• Copies of letters or papers that are relevant; and
• Any travel details that relate to your complaint.
What happens next?
The 'How we will deal with your complaint' page explains:
• How we handle your complaint
• What to do if you are not happy with the outcome of your complaint or how we have
handled it
• What will happen after your complaint has been dealt with
Early 20th Century: During the 20th century, around the time of World War I, products had become more complex, requiring the deployment of complex manufacturing processes. This period also witnessed the introduction of widespread mass production and piece work. As the workers were paid by the number of pieces made, a tendency developed among the workers to strive to push out more products to earn extra, resulting in products of bad workmanship being pushed onto the assembly lines and customers.
To counter the above tendency, full-time inspectors were introduced to identify, quarantine and correct the defects. Quality control through this method of inspection during the 1920s and 30s led to the formal establishment of quality inspection functions, separated from production.
During the 1930s, mostly in the USA, quality gained importance, and the effort involved in rework and the cost of scrap started getting some attention. This led to the development of a systematic approach to quality. Mass production had grown to such an extent that the prevalent quality control method – inspection of every product produced – had become too cumbersome. At this point, statistical quality control (SQC) came into being. The introduction of statistical quality control is credited to Walter A. Shewhart of Bell Labs.
SQC came about with the realization that quality cannot be fully inspected into a batch of products. SQC introduced to the inspectors control tools such as sampling and control charts, where 100% inspection is not practicable. Statistical techniques allow inspection and testing of a certain proportion of products (a sample) to get the desired level of confidence in the quality of the entire batch or lot of production.
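As a rough illustration of Shewhart's control-chart idea, the short Python sketch below flags sample means that fall outside three-sigma limits; the target value, the sigma and the sample data are all invented for illustration, and a real chart would estimate these from historical process data.

```python
# Minimal control-chart sketch: flag sample means outside 3-sigma limits.
# Target and sigma are assumed known from past data; all numbers are invented.
target = 10.0                 # historical process mean
sigma = 0.30                  # historical std. dev. of sample means
ucl = target + 3 * sigma      # upper control limit (10.9)
lcl = target - 3 * sigma      # lower control limit (9.1)

sample_means = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 11.3, 10.0]
for i, m in enumerate(sample_means, start=1):
    status = "OK" if lcl <= m <= ucl else "OUT OF CONTROL -> investigate"
    print(f"Sample {i}: mean={m:.1f}  {status}")
```

Sample 7 (11.3) lies above the upper limit, so it would be investigated rather than the whole lot being inspected piece by piece.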
2."Adopt the new philosophy". The implication is that management should actually adopt
his philosophy, rather than merely expect the workforce to do so.
4."Move towards a single supplier for any one item." Multiple suppliers mean variation
between feedstocks.
6."Institute training on the job". If people are inadequately trained, they will not all work
the same way, and this will introduce variation.
8."Drive out fear". Deming sees management by fear as counter- productive in the long
term, because it prevents workers from acting in the organisation's best interests.
9."Break down barriers between departments". Another idea central to TQM is the
concept of the 'internal customer', that each department serves not the management, but
the other departments that use its outputs.
10."Eliminate slogans". Another central TQM idea is that it's not people who make most
mistakes - it's the process they are working within. Harassing the workforce without
improving the processes they use is counter-productive.
In 1950, JUSE (the Union of Japanese Scientists and Engineers) established the Deming Prize to repay Deming for his friendship and contribution. The Deming Prize – particularly the Deming Application Prize given to companies – has exerted immense influence on the development of the quality movement in Japan.
It is Joseph Juran who has been credited with adding the human dimension to quality. In his opinion, human relations problems – with resistance to change (cultural resistance) being one of the main causes – lie behind many quality issues. He pushed for training and education of managers on quality aspects. Juran also developed 'Juran's Trilogy', a cross-functional management approach comprising three managerial processes: Quality Planning, Quality Control and Quality Improvement. Juran is also credited with the application of the Pareto Principle (vital few, trivial many) to handle quality issues.
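As a small, hedged illustration of the Pareto Principle at work, the sketch below ranks invented defect categories and marks the "vital few" that together account for roughly 80% of all defects; the categories and counts are assumptions, not data from any real process.

```python
# Illustrative Pareto analysis over invented defect counts.
defects = {"scratches": 120, "misalignment": 310, "wrong label": 40,
           "cracks": 25, "loose screws": 505}

total = sum(defects.values())
cumulative = 0.0
print("category        count  cumulative%")
for name, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    before = cumulative                # cumulative share before this category
    cumulative += count / total
    mark = "  <-- vital few" if before < 0.80 else ""
    print(f"{name:<15} {count:>5}  {cumulative:6.1%}{mark}")
```

Here two of the five categories (loose screws and misalignment) account for over 80% of defects, so improvement effort is concentrated there first.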
Even while being influenced by Deming's and Juran's ideas on quality, the Japanese began developing their own contributions towards quality improvement. The Just-In-Time (JIT) concept, propounded and implemented by Taiichi Ohno of Toyota and Shigeo Shingo, challenged the traditional understanding of production management, revolutionizing the relationship between the manufacturer and its suppliers.
Shigeo Shingo also developed the concept of Poka-Yoke (mistake proofing), an important component of the Toyota production system. Poka-Yoke involves devising behavior-shaping constraints, or methods by which operations are carried out, so that there is no scope for the occurrence of inadvertent mistakes.
Kaoru Ishikawa is another Japanese pioneer, who contributed immensely towards the development of Quality Circles and of the quality tool known as the Ishikawa Diagram, popularly called the fishbone diagram (a tool for identifying possible causes of a problem).
In fact, some of the highly successful quality initiatives have been invented by the Japanese. Among them are Taguchi Methods, Poka-Yoke, Quality Function Deployment (QFD), the Toyota Production System and others. Many of these methods provide not only techniques but also the associated quality culture aspects.
"We are what we repeatedly do. Excellence, therefore, is not an act, but a habit."
- Aristotle 384BC-322BC
Dr. Kaoru Ishikawa's contributions to the total quality concept, his emphasis on the human side of quality, the Ishikawa diagram, and the assembly and use of the "seven basic tools of quality" are discussed in this article.
The business success of Indian companies, and their success stories in the form of case studies, are briefly summarized in this article. Further, an attempt is made to apply TQM concepts, tools and techniques to improve quality performance, to enable companies to compete in global markets and with international firms, by analyzing the importance of process management.
DEFINITION OF TQM
TQM can be defined as "an organization-wide effort to develop systems, tools, techniques, skills and the mindset to establish a quality assurance system that is responsive to the emerging market needs" – B. Mahadevan.
The Japanese recognized these problems and, with their values of quality and continuous improvement, total quality management became popular in the 1950s as Japan tried to recover its economy from the ruins of World War II. During the 1980s, Japan's exports into the USA and Europe increased significantly due to its cheaper, higher-quality products compared to those of the Western countries.
In the early 1980s, the Confederation of Indian Industry (CII) took the initiative to set up TQM practices in India. In 1982, quality circles were introduced for the first time in India; the companies in which quality circles were launched were Bharat Electronics Ltd, Bangalore, and Bharat Heavy Electricals Ltd, Trichy. In 1986, CII invited Professor Ishikawa to India to address Indian industry about quality. In 1987, a TQM division was set up by the CII; 21 companies agreed to contribute resources to this division and formed the National Committee on Quality.
In February 1991, an Indian company, with the assistance of the CII, obtained the first ISO 9000 certification in India. In 1996, the Govt. of India announced the setting up of the Quality Council of India, and a national agency for quality certification was set up as part of the WTO agreement.
TQM Five main advantages:
1. Customer Focus:
2. Continuous Improvement:
3. Employee Empowerment:
TQM philosophy is to empower all employees to seek out quality problems and correct them. Today workers are empowered with decision-making power to decide quality in the production process; their contributions are highly valued, and workers' suggestions to improve quality are implemented. This employee empowerment can be achieved through a team approach, such as quality circles, where a team of volunteer production employees and their supervisors meet regularly to solve quality problems.
6. Process Management:
7. Managing Supplier Quality:
The philosophy of TQM extends the concept of quality to suppliers and ensures that they engage in the same quality practices. If suppliers meet preset quality standards, materials do not have to be inspected upon arrival. Today many companies have a representative residing at their supplier's location, thereby involving the supplier in every stage from product design to final production.
PRINCIPLES OF TQM
1. Quality Integration
Dr. Ishikawa captures the spirit of TQM by saying: "Quality means quality of work, quality of service, quality of information, quality of process, quality of divisions, quality of people including workers, engineers, managers and executives, quality of objectives; briefly speaking, it is Total Quality, or company-wide quality." The above definition shows that quality is integrated with various activities.
2. Quality First
Customers are the most important asset of the organization. Customers are both external (the outside clientele) and internal (the employees within the organization). Dr. Ishikawa therefore proposes that manufacturers must study the requirements of consumers and consider their opinions when they design and develop a product.
One of the core principles of TQM is to do it right the first time. The modern approach argues for stopping problems at the beginning rather than at the end; Deming says 'inspection is too late, ineffective and costly'. The TQM approach is to do it right the first time rather than to react after the problem has happened. Problem prevention can be assured by controlling the process: discovering problems, identifying their root causes, and then improving the process in order to avoid the problems.
Dr. Ishikawa proposes the following steps for conducting fact-based decision making, in order to ensure that any analysis has the right basis for decision making:
Dr. Kaoru Ishikawa
Dr. Ishikawa suggests seven tools, and he believed these tools should be known widely as the 'seven basic tools of quality'. They are:
1. Pareto analysis
2. Stratification
The main objective of stratification is to grasp a problem or to analyze its causes by looking at possible and understandable factors or items. E.g., data collected on a single population is divided by time, work force, machinery, working methods, raw materials and so on into a number of strata, to find certain characteristics among the data, or whether they are the same or similar.
3. Histograms
4. Scatter Diagrams
5. Control charts
Process control charts are the most complicated of the seven basic tools of TQM. These tools are part of statistical process control; the charts are made by plotting, in sequence, the measured values of samples taken from a process.
6. Check sheets:
Check sheets are forms used to collect data in an organized manner. They are used to validate problems or causes, or to check progress during the implementation of solutions. Check sheets come in several types, depending on the objective of data collection.
The cause and effect diagram is the brilliant scientific diagram of Dr. Kaoru Ishikawa, who pioneered quality management processes and became one of the founding fathers of modern quality management.
The cause and effect diagram is used to identify all the potential causes that result in a single effect. First, all causes are arranged on the basis of their level of importance, resulting in a depiction of the relationships and hierarchy of events; by this, the root cause is identified and the areas where problems occur are found with the use of the Ishikawa diagram.
When there is a team approach to problem solving, the Ishikawa diagram is a powerful tool to capture different ideas and stimulate the team's brainstorming. The diagram is also called the fishbone diagram or cause and effect diagram.
The fishbone diagram expresses the various causes of a specific problem and its effect; if quantitative data is available, it is a comprehensive tool for in-depth analysis.
TQM is the foundation of activities, which include:
- Commitment by Senior Management and all Employees
- Meeting Customer Requirements
- Just In Time
- Improvement teams
- Reducing product and service costs
- Employee Involvement and Empowerment
- Recognition and celebration
- Challenging Quantified goals and benchmarking
- Focus on processes / improvement plans
- Specific Incorporation in strategic planning
Continuous Improvement through TQM:
TQM is mainly concerned with continuous improvement in all work, from high-level strategic planning and decision-making to detailed execution of work elements on the shop floor. It stems from the belief that mistakes can be avoided and defects can be prevented. It leads to continuously improving results, in all aspects of work, as a result of continuously improving capabilities: people, processes, technology and machine capabilities.
A principle of TQM is that mistakes may be made by people, but most of them are caused, or at least permitted, by faulty systems and processes. This means that the root cause of such mistakes can be identified and eliminated, and repetition can be prevented by changing the process.
There are three major mechanisms of prevention:
a. Mistake proofing – Poka-Yoke
b. Where mistakes can’t be absolutely prevented, detecting them early to prevent
them being passed down the value added chain (inspection at source or by the
next operation)
c. Where mistakes recur, stopping production until the process can be corrected, to
prevent the production of more defects (stop in time)
2.What is relevance of Customer satisfaction in Quality
Management? What is Customer Relationship Management?
Explain.
Customer Satisfaction
A person who employs the service or buys the product is often termed a consumer or customer. Two types of customers are identified: external and internal.
* Internal customers are those lying within the organization, like engineering, order processing etc., and
* External customers are those who are outside the organization and buy the products and services of the organization.
A famous quote from Mahatma Gandhi: ‘A customer is the most important visitor on our premises. He is not dependent on us, we are dependent on him. He is not an interruption in our work, he is the purpose of it. He is not an outsider in our business, he is part of it. We are not doing him a favor by serving him, but he is doing us a favor by giving us an opportunity to do so.’
Customer satisfaction is an important factor for the bottom line. A study has shown that a company with a 98% customer retention rate was twice as profitable as one with 94%. Studies have also shown that dissatisfied customers tell at least twice as many friends about bad experiences as they tell about good ones.
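A little arithmetic makes the retention figures intuitive. Under a simple geometric model (an assumption made here for illustration, not a claim from the study cited above), the expected number of repeat purchases is 1 / (1 - r) for a retention rate r, so moving from 94% to 98% retention roughly triples the average customer lifetime:

```python
# Back-of-envelope sketch under a simplifying geometric-retention assumption.
def expected_purchases(retention_rate: float) -> float:
    """Expected lifetime purchases per customer: 1 / (1 - r)."""
    return 1.0 / (1.0 - retention_rate)

for r in (0.94, 0.98):
    print(f"retention {r:.0%}: ~{expected_purchases(r):.1f} purchases per customer")
# 94% -> ~16.7 purchases; 98% -> ~50.0 purchases: a four-point gain in
# retention compounds into roughly three times the customer lifetime.
```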
‘Customer loyalty’ is one of the driving factors for achieving sustained profitability and increased market share. Loyal customers are those who stay with the company and make positive referrals. Satisfaction and loyalty are different concepts. Customers who are merely satisfied may often purchase from competitors because of convenience, promotions and other factors. Loyal customers place priority on doing business with a particular company, and will often go out of their way or pay a premium to stay with the company. Loyal customers spend more, are willing to pay higher prices, refer new clients and are less costly to do business with.
In addition to value, satisfaction and loyalty are influenced greatly by service quality, integrity and the relationships that organizations build with customers. One study found that customers are five times more likely to switch because of perceived service problems than for price concerns or product quality issues.
Customer needs and expectations are translated into inputs for the design, production and delivery processes, a flow that can be depicted as a quality flow map.
True customer needs and expectations might be called Expected Quality. This is what the customer assumes will be received from the product. These needs and expectations are translated into specifications for products and services (Design Quality).
Actual Quality is the outcome of the production process and what is delivered to the customer. However, actual quality may differ considerably from expected quality if information on expectations is misinterpreted or gets lost from one step to the next in the flow map.
Customers assess quality and develop perceptions (Perceived Quality) by comparing their expectations (Expected Quality) with what they receive (Actual Quality). If the actual quality falls short of the expected quality, the customer will be dissatisfied. On the other hand, if the actual quality exceeds expectations, the customer will be satisfied or even delighted. As perceived quality drives customer behavior, producers should make every effort to ensure that actual quality conforms to expected quality.
It should be noted that customer perceptions are not static but keep changing over time. Hence, producers are expected to be on their toes to meet the changing expectations.
Producers are required to look at processes through the customer's eyes, not the organization's. An organization's focus is often reflected by the measures it uses to evaluate its performance. When an organization's focus is on such things as production schedules, costs, productivity or output quantity, rather than on ease of use, availability or cost to the customer, it is difficult to create a customer-focused culture.
As customers' needs, expectations and values keep changing, there is no static picture of customers' quality needs. According to an ASQ survey, the important factors in a purchase for the customer are:
* Features
* Performance
* Price
* Service
* Reputation and
* Warranty
Methodologies for feedback involve comment cards, surveys, focus groups, toll-free numbers, report cards, the Internet, customer visits, employee feedback and standard indexes like the ACSI, the "American Customer Satisfaction Index". The ACSI allows comparison between company and industry averages.
Studies suggest that the customer who does not complain is the most prone to switch to another product. Every individual complaint needs to be entertained. Results also suggest that half of dissatisfied customers will buy again if they feel that their complaint has been addressed.
Service Quality:
The Kano model is the most basic conceptualization of customer requirements. It uses three lines – red, blue and green – to explain its ideology. The red line shows innovation, the blue line shows spoken and expected requirements, and the green line shows unspoken and expected requirements. The Kano model is based on the assumption that a customer buys when he needs something; however, this is not completely true – an organization must go beyond stated customer needs. This can be understood through the "Voice of the Customer" concept.
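A minimal sketch of the Kano-style classification described above, where the three classes roughly correspond to the green (basic), blue (performance) and red (delighter) lines; the feature-to-class mapping below is invented purely for illustration, since a real study would derive it from customer surveys:

```python
# Hedged Kano-style classification sketch; the mapping below is invented.
KANO_CLASSES = {
    "basic": "unspoken and expected -- absence causes dissatisfaction",
    "performance": "spoken and expected -- satisfaction scales with delivery",
    "delighter": "unspoken and unexpected -- presence delights the customer",
}

features = {
    "brakes that work": "basic",
    "fuel economy": "performance",
    "lifetime free servicing": "delighter",
}

for feature, cls in features.items():
    print(f"{feature}: {cls} ({KANO_CLASSES[cls]})")
```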
Customer Retention:
From the company's point of view, customer retention is more powerful and efficient than customer satisfaction alone. It involves activities which are basically related to customer satisfaction, carried out in order to increase the loyalty of customers towards the company. It moves customer satisfaction to the next level by determining what is actually important to the customer.
CRM stands for Customer Relationship Management. It is a process or methodology used to learn more
about customers' needs and behaviors in order to develop stronger relationships with them. There are
many technological components to CRM, but thinking about CRM in primarily technological terms is a
mistake. The more useful way to think about CRM is as a process that will help bring together lots of pieces
of information about customers, sales, marketing effectiveness, responsiveness and market trends.
CRM helps businesses use technology and human resources to gain insight into the behavior of customers and the value of those customers.
Philip Crosby has pointed out that higher quality means lower costs, not higher expense. In fact, inefficient processes consume more resources and produce more waste, thereby increasing costs. In his famous book, 'Quality is Free', Crosby gives umpteen examples showing that the outcome of well-run processes is optimum cost, while inefficient processes result in higher costs. Systematic quality improvements ensure efficient processes, which in turn consume optimum resources and reduce waste to a minimum or to nil.
It is generally known that if the results of an inefficient process are converted into monetary terms and reported to top management, it will attract immediate attention and convey the gravity of the situation. Similarly, the benefits of processes improved through quality improvement programs can be converted into monetary terms to highlight the benefits of such improvement initiatives. The results of processes converted into monetary terms are referred to as 'cost of quality'.
The concept emerged in the 1950s, and was first described by Armand Feigenbaum.
The "cost of quality" isn't the price of creating a quality product or service. It's the cost of NOT creating a quality product or service. Every time work is redone, the cost of quality increases. In short, any cost that would not have been expended if quality were perfect contributes to the cost of quality.
Prevention Costs
The costs of all activities specifically designed to prevent poor quality in products or services.
Appraisal Costs
The costs associated with measuring, evaluating or auditing products or services to assure conformance
to quality standards and performance requirements.
Failure Costs
The costs resulting from products or services not conforming to requirements or customer/user needs.
Failure costs are divided into internal and external failure categories.
Internal Failure Costs
Failure costs occurring prior to delivery or shipment of the product, or the furnishing of a service, to the
customer. Examples include:
• Scrap
• Rework
• Re-inspection
• Re-testing
• Material review
• Downgrading
External Failure Costs
Failure costs occurring after delivery or shipment of the product, and during or after furnishing of a
service, to the customer.
Total Cost of Quality
The sum of the above costs. This represents the difference between the actual cost of a product or
service and what the reduced cost would be if there were no possibility of substandard service, failure of
products or defects in their manufacture.
As defined by Crosby ("Quality Is Free"), Cost Of Quality (COQ) has two main
components: *Cost Of Conformance* and *Cost Of Non-Conformance* (see
respective definitions).
Cost of quality is the amount of money a business loses because its product or service
was not done right in the first place. From fixing a warped piece on the assembly line
to having to deal with a lawsuit because of a malfunctioning machine or a badly
performed service, businesses lose money every day due to poor quality. For most
businesses, this can run from 15 to 30 percent of their total costs.
COPQ consists of those costs which are generated as a result of producing defective material.
This cost includes the cost involved in fulfilling the gap between the desired and actual
product/service quality. It also includes the cost of lost opportunity due to the loss of
resources used in rectifying the defect. This cost includes all the labor cost, rework cost,
disposition costs, and material costs that have been added to the unit up to the point of
rejection. COPQ does not include detection and prevention cost.
COPQ is the annual monetary loss from products and processes that are not achieving their quality
objectives. COPQ is not limited to quality in the narrow sense; it is essentially the cost of the waste
associated with poorly performing processes.
There is often confusion between the two terms COQ and COPQ.
Cost of quality (COQ) is a phrase closely associated with Philip Crosby, the noted quality
expert, author and originator of the "zero defects" concept, who used it to refer to the costs
associated with providing poor-quality products or services. Many quality
practitioners therefore prefer the term cost of poor quality (COPQ).
The impact this can have on the workforce should not be underestimated.
With a little imagination you could picture the effect of storing all scrap for one year
and then presenting it as an "in-your-face" visual at a business
improvement initiative.
Cost Of Conformance
(COC) A component of the *Cost Of Quality* for a work product. Cost of conformance is the
total cost of ensuring that a product is of good *Quality*. It includes costs of *Quality
Assurance* activities such as standards, training, and processes; and costs of *Quality
Control* activities such as reviews, audits, inspections, and testing.
Cost Of Non-Conformance
(CONC) The element of the *Cost Of Quality* representing the total cost to the organisation
of failure to achieve a good *Quality* product.
CONC includes both in-process costs generated by quality failures, particularly the cost of
*Rework*; and post-delivery costs including further *Rework*, re-performance of lost work
(for products used internally), possible loss of business, possible legal redress, and other
potential costs.
The cost of conformance comprises:
• Prevention Cost
• Appraisal Cost
The minimum total cost, for example, is achieved at about 98% perfection. This
percentage is also known as best practice: beyond it, the cost of achieving a further
improvement outweighs the benefits of that improvement.
On a cost-of-quality curve of this kind, the X-axis is sometimes labelled "Defects",
which can cause confusion; it is in fact the degree of perfection achieved for the
individual curves such as Appraisal and Prevention. In some instances, this axis is
taken as increasing quality of design.
Allocation of Overheads to Cost of Quality:
This issue often comes up while calculating the cost of quality. Any of the
following three approaches may be followed; however, it is particularly
important to adhere strictly to the selected approach over a period of
time:
a. Include total overheads, using direct labour or some
other base
b. Include the variable overhead only
c. Do not include any overhead
The allocation of overhead has an impact on the total cost of quality and on its
distribution over various departments. Activity Based Costing (ABC) can
help to provide a realistic allocation of overhead costs.
Lack of Communication:
- Eliminating communication barriers between management and employees can improve
production and increase product and service quality
- Communication barriers, particularly those that isolate departments and
individuals, can prove detrimental
- Both employees and management need to know what is going on in order to be
effective; customers and suppliers also need to be involved
- Since communication is a management responsibility and one of the major
barriers to quality, top management should answer certain questions:
- Does the organization design support or inhibit quality achievement?
- Is quality recognized as a problem?
- Is the focus right for achieving quality?
- Is the mindset holistic or reductionist?
- What is the relative health of the organization's quality improvement
initiative?
- What are the strengths ('assets') that support and reinforce quality
improvement efforts?
- In which areas is the organization failing to sustain the quality improvement
philosophy?
- Are you seeing the quality results you had hoped for?
Top management's commitment and support for the total quality management
program, along with proper communication, will soften the barriers and promote
teamwork.
Removing barriers will improve employees' perception of the organization, and
work satisfaction will improve as employees make better use of their resources.
Incompatible Organizational Structure
- An autocratic organizational structure and policies can lead to
implementation problems. If the organizational structure is a
problem and is incompatible, it should be restructured to achieve the
expected outcomes
Isolated Individuals and Departments
- Teamwork is an essential part of a TQM environment. When
TQM principles are used, the isolation of individuals and
departments dissolves over time. Teams solving problems
with SQC tools can analyze root causes and solve them
readily, taking advantage of their combined efforts
Ineffective Measurement Techniques, Lack of Data and Results
- Ineffective measurement techniques, the absence of a
measurement process, inaccurate data, and lack of access to
data all run counter to TQM principles. Quick, easy access to
data is important; data retrieval must be efficient, not
time-consuming or labour-intensive
- Further, decision makers must also receive training in data
analysis and interpretation so that the measurement system
will serve its intended purpose
Paying Inadequate Attention to Internal and External Customers
Organizations must pay attention to both internal and external customers
and understand their needs and expectations. Managers may make assumptions
about customer needs and expectations, resulting in misdirected efforts
and investments. This brings customer dissatisfaction and labour
mistrust.
6. What are Quality Standards? Write a note on the ISO 9000 series of
Quality Standards and Malcolm Baldrige Criteria for Business Performance
Excellence
2. Explain the concept of Knowledge Management. With an example, describe the role of
“Quality” in Knowledge Management
Information relates to description, definition or perspective (what, who, when, where).
Knowledge comprises strategy, practice, method or approach (how).
Knowledge management (KM) is a relatively new form of MIS that expands the concept to
include information systems that provide decision-making tools and data to people at all
levels of a company. The idea behind KM is to facilitate the sharing of information within a
company in order to eliminate redundant work and improve decision-making. KM becomes
particularly important as a small business grows. When there are only a few employees, they
can remain in constant contact with one another and share knowledge directly. But as the
number of employees increases and they are divided into teams or functional units, it
becomes more difficult to keep the lines of communication open and encourage the sharing of
ideas.
Dr. W. Edwards Deming is best known for reminding management that most problems are
systemic and that it is management's responsibility to improve the systems so that workers
(management and non-management) can do their jobs more effectively. Deming argued that
higher quality leads to higher productivity, which, in turn, leads to long-term competitive
strength. The theory is that improvements in quality lead to lower costs and higher
productivity because they result in less rework, fewer mistakes, fewer delays, and better use
of time and materials. With better quality and lower prices, a firm can achieve a greater
market share and thus stay in business, providing more and more jobs.
When he died in December 1993 at the age of ninety-three, Deming had taught quality and
productivity improvement for more than fifty years. His Fourteen Points, System of Profound
Knowledge, and teachings on statistical control and process variability are studied by people
all over the world. His books include: Out of the Crisis (1986), The New Economics (1993),
and Statistical Adjustment of Data (1943).
In emphasizing management's responsibility, Deming noted that workers are responsible for
10 to 20 percent of the quality problems in a factory, and that the remaining 80 to 90 percent
is under management's control. Workers are responsible for communicating to management
the information they possess regarding the system. Deming's approach requires an
organization-wide cultural transformation.
Deming's quality methods centered on systematically tallying product defects, analyzing their
causes, correcting the causes, and recording the effects of the corrections on subsequent
product quality as defects were prevented. He taught that it is less costly in the long run to get
things done right the first time than to fix them later.
The son of a small-town lawyer, Deming (a teacher and consultant in statistical studies)
attended the University of Wyoming, University of Colorado, and Yale University, where he
earned his Ph.D. in mathematical physics. He then taught physics at several universities,
worked as a mathematical physicist at the U.S. Department of Agriculture and was a
statistical adviser for the U.S. Census Bureau.
From 1946 to 1993 he was a professor of statistics at New York University's graduate school
of business administration, and he taught at Columbia University. Deming became interested
in the use of statistical analysis to achieve better quality control in industry in the 1930s.
In 1950 Deming began teaching and consulting with Japanese industrialists through the
Union of Japanese Scientists and Engineers (JUSE). In 1960, he received the Second Order
Medal of the Sacred Treasure from the Emperor of Japan for improvement of quality and the
Japanese economy. In 1987 he received the National Medal of Technology from U. S.
President Ronald Reagan because of his impact on quality in the United States.
From 1946 to 1993, he was an international teacher and consultant in the area of quality
improvement based on statistics, leadership, and customer satisfaction. The Deming Prize for
quality was established in 1951 in Japan by JUSE and in 1980 in the United States by the
Metropolitan Section of the American Society for Quality.
American companies ignored Deming's teachings for years. In 1980, NBC aired the program
"If Japan Can, Why Can't We?," highlighting Deming's contributions in Japan and American
companies began to discover Deming. His ideas were used by major U.S. corporations as
they sought to compete more effectively against foreign manufacturers.
As a consultant, Deming continued to conduct Quality Management seminars until just days
before his death in 1993.
One of Deming's essential theories is his System of Profound Knowledge, which includes
appreciation for a system, knowledge about variation (statistics), theory of knowledge, and
psychology (of individuals, groups, society, and change). Although the Fourteen Points are
probably the most widely known of Dr. Deming's theories, he actually taught them as a part
of his System of Profound Knowledge. His knowledge system consists of four interrelated
parts: (1) Theory of Optimization; (2) Theory of Variation; (3) Theory of Knowledge; and (4)
Theory of Psychology.
THEORY OF OPTIMIZATION.
The objective of an organization is the optimization of the total system and not the
optimization of the individual subsystems. The total system consists of all constituents—
customers, employees, suppliers, shareholders, the community, and the environment. A
company's long-term objective is to create a win-win situation for all of its constituents.
Subsystem optimization works against this objective and can lead to a suboptimal total
system. According to Deming, it is poor management, for example, to purchase materials or
service at the lowest price or to minimize the cost of manufacturing if it is at the expense of
the system. Inexpensive materials may be of such inferior quality that they will cause
excessive costs in adjustment and repair during manufacturing and assembly.
THEORY OF VARIATION.
Deming's philosophy focuses on reducing uncertainty and variability in product and service
design and in manufacturing processes. Deming believed that variation is a major
cause of poor quality. In mechanical assemblies, for example, variations from specifications
for part dimensions lead to inconsistent performance and premature wear and failure.
Likewise, inconsistencies in service frustrate customers and hurt companies' reputations.
Deming taught Statistical Process Control and used control charts to demonstrate variation in
processes and how to determine if a process is in statistical control.
There is variation in every process. Even with the same inputs, a production process can
produce different results because it contains many sources of variation: for example, the
materials may not always be exactly the same; the tools wear out over time and are
subjected to vibration, heat or cold; or the operators may make mistakes. Variation due to any
of these individual sources appears at random; however, their combined effect is stable and
usually can be predicted statistically. These factors that are present as a natural part of a
process are referred to as common (or system) causes of variation.
Common causes are due to the inherent design and structure of the system. It is
management's responsibility to reduce or eliminate common causes. Special causes are
external to the system, and it is the responsibility of operating personnel to eliminate such
causes. Common causes of variation generally account for about 80 to 90 percent of the
observed variation in a production process. The remaining 10 to 20 percent are the result of
special causes of variation, often called assignable causes. Factors such as bad material from
a supplier, a poorly trained operator or excessive tool wear are examples of special causes. If
no operators are trained at all, that is a system problem, not a special cause; the system itself
has to be changed.
THEORY OF KNOWLEDGE.
Deming emphasized that knowledge is not possible without theory, and experience alone
does not establish a theory. Experience only describes—it cannot be tested or validated—and
alone is no help for management. Theory, on the other hand, shows a cause-and-effect
relationship that can be used for prediction. There is a lesson here for the widespread
benchmarking practices: copying only an example of success, without understanding it in
theory, may not lead to success, but could lead to disaster.
THEORY OF PSYCHOLOGY.
Deming believed that traditional management practices, such as the Seven Deadly Diseases
listed below, significantly contributed to the American quality crisis.
1. Lack of constancy of purpose to plan and deliver products and services that will help a
company survive in the long term.
2. Emphasis on short-term profits caused by short-term thinking (which is just the opposite of
constancy of purpose), fear of takeovers, worry about quarterly dividends, and other types of
reactive management.
3. Performance appraisals (i.e., annual reviews, merit ratings) that promote fear and stimulate
unnecessary competition among employees.
4. Mobility of management (i.e., job hopping), which promotes short-term thinking.
5. Management by use of visible figures without concern about other data, such as the effect of
happy and unhappy customers on sales, and the increase in overall quality and productivity
that comes from quality improvement upstream.
6. Excessive medical costs, which now have been acknowledged as excessive by federal and
state governments, as well as industries themselves.
7. Excessive costs of liability further increased by lawyers working on contingency fees.
Deming formulated the following Fourteen Points to cure (eliminate) the Seven Deadly
Diseases and help organizations to survive and flourish in the long term:
1. Create constancy of purpose toward improvement of product and service. Develop a plan to
be competitive and stay in business. Everyone in the organization, from top management to
shop floor workers, should learn the new philosophy.
2. Adopt the new philosophy. Commonly accepted levels of delays, mistakes, defective
materials, and defective workmanship are now intolerable. We must prevent mistakes.
3. Cease dependence on mass inspection. Instead, design and build in quality. The purpose of
inspection is not to send the product for rework because it does not add value. Instead of
leaving the problems for someone else down the production line, workers must take
responsibility for their work. Quality has to be designed and built into the product; it cannot be
inspected into it. Inspection should be used as an information-gathering device, not as a
means of "assuring" quality or blaming workers.
4. Don't award business on price tag alone (but also on quality, value, speed and long term
relationship). Minimize total cost. Many companies and organizations award contracts to the
lowest bidder as long as they meet certain requirements. However, low bids do not guarantee
quality; and unless the quality aspect is considered, the effective price per unit that a
company pays its vendors may be understated and, in some cases, unknown. Deming urged
businesses to move toward single-sourcing, to establish long-term relationships with a few
suppliers (one supplier per purchased part, for example) leading to loyalty and opportunities
for mutual improvement. Using multiple suppliers has been long justified for reasons such as
providing protection against strikes or natural disasters or making the suppliers compete
against each other on cost. However, this approach has ignored "hidden" costs such as
increased travel to visit suppliers, loss of volume discounts, increased set-up charges
resulting in higher unit costs, and increased inventory and administrative expenses. Also
constantly changing suppliers solely on the base of price increases the variation in the
material supplied to production, since each supplier's process is different.
5. Continuously improve the system of production and service. Management's job is to
continuously improve the system with input from workers and management. Deming was a
disciple of Walter A. Shewhart, the developer of control charts and the continuous cycle of
process improvement known as the Shewhart cycle. Deming popularized the Shewhart Cycle
as the Plan-Do-Check-Act (PDCA) or Plan-Do-Study-Act (PDSA) cycle; therefore, it is also
often referred to as the Deming cycle. In the planning stage, opportunities for improvement
are recognized and operationally defined. In the doing stage, the theory and course of action
developed in the previous stage is tested on a small scale through conducting trial runs in a
laboratory or prototype setting. The results of the testing phase are analyzed in the
check/study stage using statistical methods. In the action stage, a decision is made regarding
the implementation of the proposed plan. If the results were positive in the pilot stage, then
the plan will be implemented. Otherwise alternative plans are developed. After full scale
implementation, customer and process feedback will again be obtained and the process of
continuous improvement continues.
6. Institute training on the job. When training is an integral part of the system, operators are
better able to prevent defects. Deming understood that employees are the fundamental asset
of every company, and they must know and buy into a company's goals. Training enables
employees to understand their responsibilities in meeting customers' needs.
7. Institute leadership (modern methods of supervision). The best supervisors are leaders and
coaches, not dictators. Deming highlighted the key role of supervisors, who serve as a vital
link between managers and workers. Supervisors first have to be trained in quality
management before they can communicate management's commitment to quality
improvement and serve as role models and leaders.
8. Drive out fear. Create a fear-free environment where everyone can contribute and work
effectively. There is an economic loss associated with fear in an organization. Employees try
to please their superiors. Also, because they feel that they might lose their jobs, they are
hesitant to ask questions about their jobs, production methods, and process parameters. If a
supervisor or manager gives the impression that asking such questions is a waste of time,
then employees will be more concerned about pleasing their supervisors than meeting long-
term goals of the organization. Therefore, creating an environment of trust is a key task of
management.
9. Break down barriers between areas. People should work cooperatively with mutual trust,
respect, and appreciation for the needs of others in their work. Internal and external
organizational barriers impede the flow of information, prevent entities from perceiving
organizational goals, and foster the pursuit of subunit goals that are not necessarily consistent
with the organizational goals. Barriers between organizational levels and departments are
internal barriers. External barriers are between the company and its suppliers, customers,
investors, and community. Barriers can be eliminated through better communication, cross-
functional teams, and changing attitudes and cultures.
10. Eliminate slogans aimed solely at the work force. Most problems are system-related and
require managerial involvement to rectify or change. Slogans don't help. Deming believed that
people want to do work right the first time. It is the system that 80 to 90 percent of the time
prevents people from doing their work right the first time.
11. Eliminate numerical goals, work standards, and quotas. Objectives set for others can force
sub-optimization or defective output in order to achieve them. Instead, learn the capabilities of
processes and how to improve them. Numerical goals set arbitrarily by management,
especially if they are not accompanied by feasible courses of action, have a demoralizing
effect. Goals should be set in a participative style together with methods for accomplishment.
Deming argued that the quota or work standard system is a short-term solution and that
quotas emphasize quantity over quality. They do not provide data about the process that can
be used to meet the quota, and they fail to distinguish between special and common causes
when seeking improvements to the process.
12. Remove barriers that hinder workers (and hinder pride in workmanship). The direct effect of
pride in workmanship is increased motivation and a greater ability for employees to see
themselves as part of the same team. This pride can be diminished by several factors: (1)
management may be insensitive to workers' problems; (2) they may not communicate the
company's goals to all levels; and (3) they may blame employees for failing to meet company
goals when the real fault lies with the management.
13. Institute a vigorous program of education and self improvement. Deming's philosophy is
based on long-term, continuous process improvement that cannot be carried out without
properly trained and motivated employees. This point addresses the need for ongoing and
continuous education and self-improvement for the entire organization. This educational
investment serves the following objectives: (1) it leads to better motivated employees; (2) it
communicates the company goals to the employees; (3) it keeps the employees up-to-date on
the latest techniques and promotes teamwork; (4) training and retraining provides a
mechanism to ensure adequate performance as the job responsibilities change; and (5)
through increasing job loyalty, it reduces the number of people who "job-hop."
14. Take action to accomplish the transformation. Create a structure in top management that will
promote the previous thirteen points. It is the top management's responsibility to create and
maintain a structure for the dissemination of the concepts outlined in the first thirteen points.
Deming felt that people at all levels in the organization should learn and apply his Fourteen
Points if statistical process control is to be a successful approach to process improvement
and if organizations are to be transformed. However, he encouraged top management to
learn them first. He believed that these points represent an all-or-nothing commitment and
that they cannot be implemented selectively.
Known as the Deming Plan-Do-Check-Act (PDCA) Cycle, this concept was invented by
Shewhart and popularized by Deming. This approach is a cyclic process for planning and
testing improvement activities prior to full-scale implementation and/or prior to formalizing
the improvement. When an improvement idea is identified, it is often wise to test it on a small
scale prior to full implementation to validate its benefit. Additionally, by introducing a
change on a small scale, employees have time to accept it and are more likely to support it.
The Deming PDCA Cycle provides opportunities for continuous evaluation and
improvement.
The steps in the Deming PDCA or PDSA Cycle are Plan, Do, Check (Study) and Act, as described above.
Deming was trained as a mathematical physicist, and he utilized mathematical concepts and
tools (Statistical Process Control) to reduce variation and prevent defects. However, one of
his greatest contributions might have been in recognizing the importance of organizational
culture and employee attitudes in creating a successful organization. In many ways, his
philosophies paralleled the development of the resource-based view of organizations that
emphasized that employee knowledge and skills and organizational culture are very difficult
to imitate or replicate, and they can serve as a basis of sustainable competitive advantage.
Dr. Juran was born on December 24, 1904 in Braila, Romania. He moved to the United States
in 1912 at the age of 8. Juran's teaching and consulting career spanned more than seventy
years, and he was known as one of the foremost experts on quality in the world.
A quality professional from the beginning of his career, Juran joined the inspection branch of
Western Electric's Hawthorne Works (a Bell manufacturing plant) in 1924, after
completing his B.S. in Electrical Engineering. In 1934, he became a quality manager. He
worked with the U. S. government during World War II and afterward became a quality
consultant. In 1952, Dr. Juran was invited to Japan. Dr. W. Edwards Deming helped arrange the
meeting that led to this invitation and to Juran's many years of work with Japanese companies.
Juran founded the Juran Center for Quality Improvement at the University of Minnesota and
the Juran Institute. His third book, Juran's Quality Control Handbook, published in 1951,
was translated into Japanese. Other books include Juran on Planning for Quality (1988),
Juran on Leadership for Quality (1989), Juran on Quality by Design (1992), Quality
Planning and Analysis (1993), and A History of Managing for Quality (1995). Architect of
Quality (2004) is his autobiography.
SELECTED JURAN QUALITY THEORIES
Juran's concepts can be used to establish a traditional quality system, as well as to support
Strategic Quality Management. Among other things, Juran's philosophy includes the Quality
Trilogy and the Quality Planning Roadmap.
The Quality Trilogy emphasizes the roles of quality planning, quality control, and quality
improvement. Quality planning's purpose is to provide operators with the ability to produce
goods and services that can meet customers' needs. In the quality planning stage, an
organization must determine who the customers are and what they need, develop the product
or service features that meet customers' needs, develop processes which are able to deliver
those products and services, and transfer the plans to the operating forces. If quality planning
is deficient, then chronic waste occurs.
Quality control is used to prevent things from getting worse. Quality control is the inspection
part of the Quality Trilogy where operators compare actual performance with plans and
resolve the differences. Chronic waste should be considered an opportunity for quality
improvement, the third element of the Trilogy. Quality improvement encompasses
improvement of fitness-for-use and error reduction, seeks a new level of performance that is
superior to any previous level, and is attained by applying breakthrough thinking.
While up-front quality planning is what organizations should be doing, it is normal for
organizations to focus their first quality efforts on quality control. In this aspect of the Quality
Trilogy, activities include inspection to determine percent defective (or first pass yield) and
deviations from quality standards. Activities can then focus on another part of the trilogy,
quality improvement, and make it an integral part of daily work for individuals and teams.
Quality planning must be integrated into every aspect of the organization's work, such as
strategic plans; product, service and process designs; operations; and delivery to the
customer. The Quality Trilogy is depicted in the figure below.
[Figure: Juran's Quality Trilogy, showing quality improvement through breakthrough and Pareto analysis]
JURAN'S QUALITY PLANNING ROAD MAP.
Juran's Quality Planning Road Map can be used by individuals and teams throughout the world as a
checklist for understanding customer requirements, establishing measurements based on customer
needs, optimizing product design, and developing a process that is capable of meeting customer
requirements. The Quality Planning Roadmap is used for Product and Process Development and is
shown in Figure 3.
Juran's Quality Trilogy and Quality Roadmap are not enough. An infrastructure for Quality
must be developed, and teams must work on improvement projects. The infrastructure should
include a quality steering team with top management leading the effort, quality should
become an integral part of the strategic plan, and all people should be involved. As people
identify areas with improvement potential, they should team together to improve processes
and produce quality products and services.
Under the "Big Q" concept, all people and departments are responsible for quality. In the old
era under the concept of "little q," the quality department was responsible for quality. Big "Q"
allows workers to regain pride in workmanship by assuming responsibility for quality. Juran
believed quality is associated with customer satisfaction and dissatisfaction with the product,
and he emphasized the necessity for ongoing quality improvement through a succession of small
improvement projects carried out throughout the organization, guided by his ten steps to
quality improvement.
He concentrated not just on the end customer but also on other external and internal customers.
Each person along the chain, from product designer to final user, is both a supplier and a customer.
In addition, each person is a processor, carrying out some transformation or activity.
PHILIP CROSBY (1926–2001)
Philip Bayard Crosby was born in Wheeling, West Virginia, in 1926. After Crosby graduated
from high school, he joined the Navy and became a hospital corpsman. In 1946 Crosby
entered the Ohio College of Podiatric Medicine in Cleveland. After graduation he returned to
Wheeling and practiced podiatry with his father. He was recalled to military service during
the Korean conflict; this time he served as a Marine medical corpsman.
In 1952 Crosby went to work for the Crosley Corp. in Richmond, Indiana, as a junior
electronic test technician. He joined the American Society for Quality, where his early
concepts concerning Quality began to form. In 1955, he went to work for Bendix Corp. as a
reliability technician and quality engineer. He investigated defects found by the test people
and inspectors.
In 1957 he became a senior quality engineer with Martin Marietta Co. in Orlando, Florida.
During his eight years with Martin Marietta, Crosby developed his "Zero Defects" concepts,
began writing articles for various journals, and started his speaking career.
In 1965 International Telephone and Telegraph (ITT) hired Crosby as vice president in
charge of corporate quality. During his fourteen years with ITT, Crosby worked with many of
the world's largest industrial and service companies, implementing his pragmatic
management philosophy, and found that it worked.
After a number of years in industry, Crosby established the Crosby Quality College in Winter
Park, Florida. He is well known as an author and consultant and has written many articles and
books. He is probably best known for his book Quality is Free (1979) and concepts such as
his Absolutes of Quality Management, Zero Defects, Quality Management Maturity Grid, 14
Quality Improvement Steps, Cost of Quality, and Cost of Nonconformance. Other books he
has written include Quality Without Tears (1984) and Completeness (1994).
In his book Quality Is Free, Crosby makes the point that it costs money to achieve quality,
but it costs more money when quality is not achieved. When an organization designs and
builds an item right the first time (or provides a service without errors), quality is free. It does
not cost anything above what would have already been spent. When an organization has to
rework or scrap an item because of poor quality, it costs more. Crosby discusses Cost of
Quality and Cost of Nonconformance or Cost of Nonquality. The intention is to spend more
money on preventing defects and less on inspection and rework.
Crosby espoused his basic theories about quality in four Absolutes of Quality Management,
commonly stated as: (1) quality means conformance to requirements, not goodness; (2) the
system for achieving quality is prevention, not appraisal; (3) the performance standard is
zero defects, not "close enough"; and (4) the measurement of quality is the price of
nonconformance, not indexes.
To support his Four Absolutes of Quality Management, Crosby developed the Quality
Management Maturity Grid and Fourteen Steps of Quality Improvement. Crosby sees the
Quality Management Maturity Grid as a first step in moving an organization towards quality
management. After a company has located its position on the grid, it implements a quality
improvement system based on Crosby's Fourteen Steps of Quality Improvement as shown in
Figure 4.
Crosby's Absolutes of Quality Management are further delineated in his Fourteen Steps of
Quality Improvement (see Figure 4 above).
ARMAND V. FEIGENBAUM
Feigenbaum was still a doctoral student at the Massachusetts Institute of Technology when he
completed the first edition of Total Quality Control (1951). An engineer at General Electric
during World War II, Feigenbaum used statistical techniques to determine what was wrong
with early jet airplane engines. For ten years he served as manager of worldwide
manufacturing operations and quality control at GE. Feigenbaum serves as president of
General Systems Company, Inc., Pittsfield, Massachusetts, an international engineering firm
that designs and installs integrated operational systems for major corporations in the United
States and abroad.
Feigenbaum was the founding chairman of the International Academy for Quality and is a
past president of the American Society for Quality Control, which presented him its Edwards
Medal and Lancaster Award for his contributions to quality and productivity. His Total
Quality Control concepts have had a very positive impact on quality and productivity for
many organizations throughout the industrialized world.
An author and consultant in the area of process improvement, Harrington spent forty years
with IBM. His career included serving as Senior Engineer and Project Manager of Quality
Assurance for IBM, San Jose, California. He was President of Harrington, Hurd and Reicker,
a well-known performance improvement consulting firm until Ernst & Young bought the
organization. He is the international quality advisor for Ernst and Young and on the board of
directors of various national and international companies.
Harrington served as president and chairman of the American Society for Quality and the
International Academy for Quality. In addition, he has been elected as an honorary member
of six quality associations outside of North America and was selected for the Singapore Hall
of Fame. His books include The Improvement Process, Business Process Improvement, Total
Improvement Management, ISO 9000 and Beyond, Area Activity Analysis, The Creativity
Toolkit, Statistical Analysis Simplified, The Quality/Profit Connection, and High
Performance Benchmarking.
Ishikawa's book, Guide to Quality Control (1982), is considered a classic because of its in-
depth explanations of quality tools and related statistics. The tool for which he is best known
is the cause and effect diagram. Ishikawa is considered the Father of the Quality Circle
Movement. Letters of praise from representatives of companies for which he was a consultant
were published in his book What Is Total Quality Control? (1985). Those companies include
IBM, Ford, Bridgestone, Komatsu Manufacturing, and Cummins Engine Co.
A statistician who worked at Western Electric and Bell Laboratories, Dr. Walter A. Shewhart
used statistics to explain process variability. It was Dr. W. Edwards Deming who publicized
the usefulness of control charts, as well as the Shewhart Cycle. However, Deming rightfully
credited Shewhart with the development of theories of process control as well as the
Shewhart transformation process on which the Deming PDCA (Plan-Do-Check or Study-Act)
Cycle is based. Shewhart's theories were first published in his book Economic Control of
Quality of Manufactured Product (1931).
One of the world's leading experts on improving the manufacturing process, Shigeo Shingo
created, with Taiichi Ohno, many of the features of just-in-time (JIT) manufacturing
methods, systems, and processes, which constitute the Toyota Production System. He has
written many books including A Study of the Toyota Production System From An Industrial
Engineering Viewpoint (1989), Revolution in Manufacturing: The SMED (Single Minute
Exchange of Die) System (1985), and Zero Quality Control: Source Inspection and the Poka
Yoke System (1986).
Shingo's greatness seems to be based on his ability to understand exactly why products are
manufactured the way they are, and then transform that understanding into a workable system
for low-cost, high quality production. Established in 1988, the Shingo Prize is the premier
manufacturing award in the United States, Canada, and Mexico. In partnership with the
National Association of Manufacturers, Utah State University administers the Shingo Prize
for Excellence in Manufacturing, which promotes world class manufacturing and recognizes
companies that excel in productivity and process improvement, quality enhancement, and
customer satisfaction.
Rather than focusing on theory, Shingo focused on practical concepts that made an immediate
difference. Specific concepts attributed to Shingo are:
• Poka Yoke requires stopping processes as soon as a defect occurs, identifying the source of
the defect, and preventing it from happening again.
• Mistake Proofing is a component of Poka Yoke. Literally, this means making it impossible to
make mistakes (i.e., preventing errors at the source).
• SMED (Single Minute Exchange of Die) is a system for quick changeovers between products.
The intent is to simplify materials, machinery, processes and skills in order to dramatically
reduce changeover times from hours to minutes. As a result products could be produced in
small batches or even single units with minimal disruption.
• Just-in-Time (JIT) Production is about supplying customers with what they want when they
want it. The aim of JIT is to minimize inventories by producing only what is required when it is
required. Orders are "pulled" through the system when triggered by customer orders, not
pushed through the system in order to achieve economies of scale with the production of
larger batches.
While quality experts would agree that Taylor's concepts increase productivity, some argue
that his concepts are focused on productivity, not process improvement and as a result could
cause less emphasis on quality. Dr. Joseph Juran said that Taylor's concepts made the United
States the world leader in productivity. However, the Taylor system required separation of
planning work from executing the work. This separation was based on the idea that engineers
should do the planning because supervisors and workers were not educated. Today, the
emphasis is on transferring planning to the people doing the work.
Dr. Genichi Taguchi was a Japanese engineer and statistician who defined what product
specification means and how this can be translated into cost effective production. He worked
in the Japanese Ministry of Public Health and Welfare, Institute of Statistical Mathematics,
Ministry of Education. He also worked with the Electrical Communications Laboratory of the
Nippon Telephone and Telegraph Co. to increase the productivity of the R&D activities.
In the mid 1950s Taguchi was a visiting professor at the Indian Statistical Institute, where he met
Walter Shewhart. He was a Visiting Research Associate at Princeton University in 1962, the
same year he received his Ph.D. from Kyushu University. He was a Professor at Tokyo's
Aoyama Gakuin University and Director of the Japanese Academy of Quality.
Taguchi was awarded the Deming Application Prize (1960), Deming awards for literature on
quality (1951, 1953, and 1984), and the Willard F. Rockwell Medal of the International
Technologies Institute (1986).
Taguchi's contributions are in robust design in the area of product development. The Taguchi
Loss Function, The Taguchi Method (Design of Experiments), and other methodologies have
made major contributions in the reduction of variation and greatly improved engineering
quality and productivity. By consciously considering the noise factors (environmental
variation during the product's usage, manufacturing variation, and component deterioration)
and the cost of failure in the field, Taguchi methodologies help ensure customer satisfaction.
Robust Design focuses on improving the fundamental function of the product or process, thus
facilitating flexible designs and concurrent engineering. Taguchi product development
includes three stages: (1) system design (the non-statistical stage drawing on engineering,
marketing, customer and other knowledge); (2) parameter design (determining how the product
should perform against defined parameters); and (3) tolerance design (finding the balance
between manufacturing cost and loss).
2. Describe
(a) Crosby's Four Absolutes of Quality
According to Crosby, quality must encompass all the phases in the manufacturing
of a product. This includes design, manufacturing, quality checks, sales, after-sales
service and customer satisfaction when the product is delivered to the
customer.
Two aims of experimental design are efficiency (get more information from fewer
experiments) and focusing (collect only the information that is really needed).
Experimental design (commonly referred to as DOE) is a useful complement to multivariate data analysis
because it generates "structured" data tables, i.e. data tables that contain a substantial amount of structured
variation. This underlying structure is then used as a basis for multivariate modeling, which helps produce
stable and robust models.
More generally, careful sample selection increases the chances of extracting useful information from the data.
When one has the possibility to actively perturb the system (experiment with the variables), these chances
become even greater. The critical part is to decide which variables to change, the intervals for this variation, and
the pattern of the experimental points.
Experimental design is a strategy to gather empirical knowledge, i.e. knowledge based on the analysis of
experimental data and not on theoretical models. It can be applied when investigating a phenomenon in order to
gain understanding or improve performance.
Building a design means carefully choosing a small number of experiments that are to be performed under
controlled conditions. There are four interrelated steps in building a design (a minimal full-factorial
sketch follows the list):
a. Define the objective of the investigation: e.g. “better understand” or “sort out
important variables” or “find the optimum conditions”
b. Define the variables that will be controlled during the experiment (design
variables), and their levels or ranges of variation.
c. Define the variables that will be measured to describe the outcome of the
experimental runs (response variables), and examine their precision
d. Choose among the available standard designs the one that is compatible with
the objective, number of design variables and precision of measurements, and has
a reasonable cost
• Most of the standard experimental designs can be generated in The Unscrambler® X once the
experimental objective, the number (and nature) of the design variables, the nature of the responses
and the economical number of experimental runs have been defined. Generating such a design will
provide the user with the list of all experiments to be performed in order to gather the required
information to meet the objectives.
• The referenced figure (not reproduced here) shows the Box-Behnken design drawn in two different
ways: the left drawing shows how it is built, while the right shows how the design is rotatable.
'One change at a time' testing always carries the risk that the experimenter may find one input
variable to have a significant effect on the response (output) while failing to discover that changing
another variable may alter the effect of the first (i.e. some kind of dependency or interaction). This
is because the temptation is to stop the test when this first significant effect has been found. In
order to reveal an interaction or dependency, 'one change at a time' testing relies on the
experimenter carrying out the tests in the appropriate direction. However, DOE plans for all possible
dependencies in the first place, and then prescribes exactly what data are needed to assess them
i.e. whether input variables change the response on their own, when combined, or not at all. In
terms of resource the exact length and size of the experiment are set by the design (i.e. before
testing begins).
DOE can be used to find answers in situations such as "what is the main contributing factor to a
problem?", "how well does the system/process perform in the presence of noise?", "what is the
best configuration of factor values to minimize variation in a response?" etc. In general, these
questions are given labels as particular types of study. In the examples given above, these are
problem solving, parameter design and robustness study. In each case, DOE is used to find the
answer, the only thing that marks them different is which factors would be used in the experiment.
11. What is meant by “5S” principle with respect to “Quality Management”? What are its
benefits? Give some examples of Companies which have implemented “5S” and benefited
from it.
12. (a) Explain the core concepts of “Business Excellence”.
(b) Suppose you are running a medium sized Business. What actions will you take to achieve
“Business Excellence” in your Organization?
Quality control is a process employed to ensure a certain level of quality in a product or service. It may include
whatever actions a business deems necessary to provide for the control and verification of certain characteristics of a
product or service. The basic goal of quality control is to ensure that the products, services, or processes provided
meet specific requirements and are dependable, satisfactory, and fiscally sound.
Essentially, quality control involves the examination of a product, service, or process for certain minimum levels of
quality. The goal of a quality control team is to identify products or services that do not meet a company's specified
standards of quality. If a problem is identified, the job of a quality control team or professional may involve stopping
production temporarily. Depending on the particular service or product, as well as the type of problem identified,
production or implementation may be halted until the problem has been corrected.
In order to implement an effective QC program, an enterprise must first decide which specific
standards the product or service must meet. Then the extent of QC actions must be
determined (for example, the percentage of units to be tested from each lot). Next, real-world
data must be collected (for example, the percentage of units that fail) and the results reported
to management personnel. After this, corrective action must be decided upon and taken (for
example, defective units must be repaired or rejected and poor service repeated at no charge
until the customer is satisfied). If too many unit failures or instances of poor service occur, a
plan must be devised to improve the production or service process and then that plan must be
put into action. Finally, the QC process must be ongoing to ensure that remedial efforts, if
required, have produced satisfactory results and to immediately detect recurrences or new
instances of trouble.
Add the outputs, and then the customers. It may help to focus on a few significant customers, ranking the
characteristics – quality, timeliness and cost – they value most about the outputs they receive.
Finally, define the necessary inputs, and the suppliers. Again, identify the most important characteristics of
these inputs.
Mathematically, we consider only a one-dimensional version of the funnel experiment. The target value for the
marble is taken as 0. Let y_i be the random variable defining the distance from the target to where the
marble came to rest at the i-th drop. Deming's four "rules of the funnel" are:
Rule 1. Leave the funnel fixed, aimed at the target, with no adjustment.
Rule 2. At drop i (i = 1, 2, 3, ...) the marble will come to rest at point y_i, measured from the target. (In
other words, y_i is the error at drop i.) Move the funnel the distance -y_i from its last position to aim
for the next drop.
Rule 3. At drop i the marble comes to rest at point y_i from the target, then for the next drop …
A continuous probability distribution is defined over an infinite number of points (such as all
values between 1 and 3, inclusive).
Normal Distribution
Normal distributions are a family of distributions that share the familiar bell shape.
Normal distributions are symmetric with scores more concentrated in the
middle than in the tails. They are defined by two parameters: the mean
(μ) and the standard deviation (σ). Many kinds of behavioral data are
approximated well by the normal distribution. Many statistical tests
assume a normal distribution. Most of these tests work well even if the
distribution is only approximately normal and in many cases as long as it
does not deviate greatly from normality.
The formula for the height (y) of a normal distribution for a given value of
x is:
y = (1 / (σ√(2π))) e^(−(x − µ)² / (2σ²))
As with any continuous probability function, the area under the curve must equal 1, and the
area between two values of X (say, a and b) represents the probability that X lies between a
and b as illustrated on Figure 1. Further, since the normal is a symmetric distribution, it has
the nice property that a known percentage of all possible values of X lie within ± a certain
number of standard deviations of the mean, as illustrated by Figure 2. For example, 68.27%
of the values of any normally distributed variable lie within the interval (µ − σ, µ + σ).
Subtracting the mean and dividing by the standard deviation, Z = (X − µ) / σ, translates any
normal variable into a new variable called the standard normal variate, Z.
The success of Shewhart's approach is based on the idea that no matter how well the
process is designed, there exists a certain amount of natural variability in output
measurements.
When the variation in process quality is due to random causes alone, the process is
said to be in-control. If the process variation includes both random and special causes
of variation, the process is said to be out-of-control.
The control chart is supposed to detect the presence of special causes of variation.
In its basic form, the control chart is a plot of some function of process
measurements against time. The points that are plotted on the graph are compared to
a pair of control limits. A point that exceeds the control limits signals an alarm.
An alarm signaled by a control chart may indicate that special causes of variation are
present, and some action should be taken, ranging from taking a re-check sample to
the stopping of a production line in order to trace and eliminate these causes. On the
other hand, an alarm may be a false one, when in practice no change has occurred in
the process. The design of control charts is a compromise between the risks of not
detecting real changes and of false alarms.
1. The measurement-function (e.g. the mean), that is used to monitor the process
parameter, is distributed according to a normal distribution. In practice, if your data
seem very far from meeting this assumption, try to transform them.
2. Measurements are independent of each other.
Comparison of univariate and multivariate control data:
Control charts are used to routinely monitor quality. Depending on the
number of process characteristics to be monitored, there are two basic
types of control charts. The first, referred to as a univariate control chart,
is a graphical display (chart) of one quality characteristic. The second,
referred to as a multivariate control chart, is a graphical display of a
statistic that summarizes or represents more than one quality
characteristic.
Characteristics of control charts:
If a single quality characteristic has been measured or computed from a
sample, the control chart shows the value of the quality characteristic
versus the sample number or versus time. In general, the chart contains
a center line that represents the mean value for the in-control process.
Two other horizontal lines, called the upper control limit (UCL) and the
lower control limit (LCL), are also shown on the chart. These control
limits are chosen so that almost all of the data points will fall within these
limits as long as the process remains in-control.
[Figure: chart demonstrating the basis of a control chart]
Why control charts "work":
The control limits as pictured in the graph might be .001 probability
limits. If so, and if chance causes alone were present, the probability of a
point falling above the upper limit would be one out of a thousand, and
similarly, a point falling below the lower limit would be one out of a
thousand. We would be searching for an assignable cause if a point
fell outside these limits. Where we put these limits determines
the risk of undertaking such a search when in reality there is no
assignable cause for variation.
Since two out of a thousand is a very small risk, the 0.001 limits
may be said to give practical assurance that, if a point falls
outside these limits, the variation was caused by an assignable
cause. It must be noted that two out of a thousand is a purely
arbitrary number. There is no reason why it could not have been
set to one out of a hundred, or even larger. The decision depends
on the amount of risk the management of the quality control
program is willing to take. In general (in the world of quality
control) it is customary to use limits that approximate a total
risk of about 0.002, close to the conventional 3-sigma limits.
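A short sketch of the arithmetic behind these probability limits, assuming SciPy (the
standard-normal values are well-known facts; the risks are the ones discussed above):

# Translate a per-side false-alarm probability into a z-multiplier for
# probability limits, and compare with conventional 3-sigma limits.
from scipy.stats import norm

per_side_risk = 0.001                 # one in a thousand per side, as above
z = norm.ppf(1 - per_side_risk)       # about 3.09 standard deviations
print(f"0.001 probability limits sit at about {z:.2f} sigma")

total_risk_3sigma = 2 * (1 - norm.cdf(3))
print(f"3-sigma limits: total false-alarm risk = {total_risk_3sigma:.4f}")  # ~0.0027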
Does this mean that when all points fall within the limits, the
process is in control? Not necessarily. If the plot looks non-
random, that is, if the points exhibit some form of systematic
behavior, there is still something wrong. For example, if the first
25 of 30 points fall above the center line and the last 5 fall below
the center line, we would wish to know why this is so. Statistical
methods to detect sequences or nonrandom patterns can be applied
to the interpretation of control charts. To be sure, "in control"
implies both that all points lie between the control limits and
that they form a random pattern.
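One simple pattern test of this kind can be sketched as follows (a Western
Electric-style run rule; the run length of 8 and the data are illustrative assumptions,
not mandated by the text):

# Flag runs of `run_length` consecutive points on one side of the center
# line, which signal non-random behavior even inside the control limits.
import numpy as np

def runs_on_one_side(points, center, run_length=8):
    """Return indices at which a same-side run reaches `run_length`."""
    side = np.sign(np.asarray(points, dtype=float) - center)  # +1 above, -1 below
    alarms, run = [], 1
    for i in range(1, len(side)):
        run = run + 1 if side[i] == side[i - 1] and side[i] != 0 else 1
        if run >= run_length:
            alarms.append(i)
    return alarms

data = [10.2, 10.3, 10.1, 10.4, 10.2, 10.5, 10.3, 10.4, 9.6, 9.7]
print(runs_on_one_side(data, center=10.0))  # flags the run of 8 points above 10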
Quality itself has been defined as fundamentally relational: 'Quality is the ongoing process of
building and sustaining relationships by assessing, anticipating, and fulfilling stated and
implied needs.' "Even those quality definitions which are not expressly relational have an
implicit relational character. Why do we try to do the right thing right, on time, every time?
To build and sustain relationships. Why do we seek zero defects and conformance to
requirements (or their modern counterpart, six sigma)? To build and sustain relationships.
Why do we seek to structure features or characteristics of a product or service that bear on
their ability to satisfy stated and implied needs (ANSI/ASQC)? To build and sustain
relationships. The focus of continuous improvement is, likewise, the building and sustaining
of relationships. It would be difficult to find a realistic definition of quality that did not have,
implicit within the definition, a fundamental express or implied focus of building and
sustaining relationships."
Other definitions of quality include:
• Quality is the customers' perception of the value of the supplier's work output.
• Error-free, value-added care and service that meets and/or exceeds both the needs and
legitimate expectations of those served, as well as those within the Medical Center.
• Quality is a momentary perception that occurs when something in our environment
interacts with us, in the pre-intellectual awareness that comes before rational thought
takes over and begins establishing order. Judgment of the resulting order is then
reported as good or bad quality value.
Quality of design
The ability to live up to the "quality of design" is maintained by the "quality of the
process": all your actions aimed at the translation, transformation and realization of
customer expectations, converting them to requirements, both qualitatively and
quantitatively, and measuring your process performance during and after the realization
of these expectations and requirements.
• Quality is doing the right things right and is uniquely defined by each individual.
• A product or process that is reliable and that performs its intended function is said
to be a quality product.
• The degree to which something meets or exceeds the expectations of its consumers,
where, to be valid, the requirements must be proven (in advance by management) to:
1) be achievable in operation
making this a universal, operational and easy-to-use definition of quality for all
outputs from any work activity or process.
The eight dimensions of quality are:
• performance
• features
• reliability
• conformance
• durability
• serviceability
• aesthetics
• perceived quality
Statistical process control (SPC) procedures can help you monitor process behavior.
Arguably the most successful SPC tool is the control chart, originally developed by Walter Shewhart in
the early 1920s. A control chart helps you record data and lets you see when an unusual event, e.g., a
very high or low observation compared with “typical” process performance, occurs.
Control charts distinguish two sources of variation:
• Common cause variation, which is intrinsic to the process and will always be present.
• Special cause variation, which stems from external sources and indicates that the
process is out of statistical control.
Various tests can help determine when an out-of-control event has occurred. However, as more tests
are employed, the probability of a false alarm also increases.
A primary tool used for SPC is the control chart, a graphical representation of certain
descriptive statistics for specific quantitative measurements of the manufacturing
process. These descriptive statistics are displayed in the control chart in comparison
to their "in-control" sampling distributions. The comparison detects any unusual
variation in the manufacturing process, which could indicate a problem with the
process. Several different descriptive statistics can be used in control charts and
there are several different types of control charts that can test for different causes,
such as how quickly major vs. minor shifts in process means are detected. Control
charts are also used with product measurements to analyze process capability and
for continuous process improvement efforts.
[Figure: typical charts and analyses used to monitor and improve manufacturing process
consistency and capability, produced with Minitab statistical software]
Fishbone Diagram
Variations: cause enumeration diagram, process fishbone, time-delay fishbone, CEDAC (cause-and-
effect diagram with the addition of cards), desired-result fishbone, reverse fishbone diagram
Description
The fishbone diagram identifies many possible causes for an effect or problem. It can be
used to structure a brainstorming session, and it immediately sorts ideas into useful
categories.
Procedure:
1. Agree on a problem statement (effect). Write it at the center right of the flipchart or
whiteboard. Draw a box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult, use
generic headings:
o Methods
o Machines (equipment)
o People (manpower)
o Materials
o Measurement
o Environment
3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: “Why does this happen?” As each idea is
given, the facilitator writes it as a branch from the appropriate category. Causes can be written
in several places if they relate to several categories.
5. Again ask “why does this happen?” about each cause. Write sub-causes branching off the
causes. Continue to ask “Why?” and generate deeper levels of causes. Layers of branches
indicate causal relationships.
6. When the group runs out of ideas, focus attention on places on the chart where ideas are few.
This fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic
iron contamination. The team used the six generic headings to prompt ideas. Layers of branches show
thorough thinking about the causes of the problem.
Fishbone Diagram Example
For example, under the heading “Machines,” the idea “materials of construction” shows four kinds of
equipment and then several specific machine numbers.
Note that some ideas appear in two different places. “Calibration” shows up under “Methods” as a factor
in the analytical procedure, and also under “Measurement” as a cause of lab error. “Iron tools” can be
considered a “Methods” problem when taking samples or a “Manpower” problem with maintenance
personnel.
Example
The managing director of a weighing machine company received a number of irate letters,
complaining of slow service and a constantly engaged telephone. Rather surprised, he asked
his support and marketing managers to look into it. With two other people, they first
defined the key symptom as 'lack of responsiveness to customers' and then met to
brainstorm possible causes, using a Cause-Effect Diagram, as illustrated.
They used the 'Four Ms' (Manpower, Methods, Machines and Materials) as primary cause
areas, and then added secondary cause areas before adding actual causes, thus helping to
ensure that all possible causes were considered. Causes common to several areas were
flagged with capital letters, and key causes to verify and address were circled.
On further investigation, they found that service visits were not well organized; engineers
just picked up a pile of calls and did them in order. They consequently set up regions by
engineer and sorted calls; this significantly reduced traveling time and improved service
turnaround. They also improved the telephone system and recommended a review of
suppliers' quality procedures.
Fig. 1. Example Cause-Effect Diagram
15. What is meant by "Process variation"? Explain the causes of variation in a process.
Conformance to customer CTQs can be measured through process variation, and it is
important in the Six Sigma methodology because the customer is always evaluating our
services, products and processes to determine how well they meet their needs. Quality is
commonly described along eight dimensions:
1. Conformance
2. Performance
3. Features
4. Reliability
5. Durability
6. Serviceability
7. Aesthetics, and
8. Perceived Quality
Each dimension can be explicitly defined and is distinct from the other dimensions of quality. A customer
may rate your service or product high in conformance, but low in reliability. Or they may view two
dimensions as working in conjunction with each other, such as durability and reliability.
This article will discuss the dimension of conformance and how process variation should be interpreted.
Process variation is important in the Six Sigma methodology, because the customer is always evaluating
our services, products and processes to determine how well they are meeting their CTQs; in other words,
how well they conform to the standards.
Understanding Conformance
Conformance can simply be defined as the degree to which your service or product meets the CTQs and
predefined standards. For the purpose of this article, it should be noted that your organization's
services and products are a function of your internal processes, as well as your suppliers' processes.
(We know that everything in business is a process, right?) Consider four examples:
1. You manufacture tires and the tread depth needs to be 5/8 inch plus or minus 0.05 inch.
2. You approve loans and you promise a response to the customer within 24 business hours of receipt.
3. You write code and your manager expects less than 5 bugs found over the life of the product per
4. You process invoices for healthcare services and your customers expect zero errors on their bills.
A simple way to teach the concept of how well your service or product conforms to the CTQs is with a picture of a
target. A target, like those used in archery or shooting, has a series of concentric circles that alternate color. In
the center of the target is the bullseye. When services or products are developed by your organization, the
bullseye is defined by CTQs, the parts are defined by dimensional standards, and the materials are defined by
purity requirements. As we see from the four examples above, the conformance CTQs usually involve a target
dimension (the exact center of the target), as well as a permissible range of variation (center yellow area).
Figure 1: Targeting Process Variation
In Figure 1, three pictures help explain the variation in a process. The picture on the left displays a
process that covers the entire target. While all the bullets appear to have hit the target, very few are
in the bullseye. This is an example of a process that is centered on the target but seldom meets the
CTQs of the customer.
The middle picture in Figure 1 displays a process that is well grouped on the target (all the bullets
hit the target in close proximity to each other), but is well off target. In this picture, as in the
first picture, almost every service or product produced fails to meet the customer CTQs.
The far right picture in Figure 1 displays a process that is well grouped, and all the bullets are
within the bullseye. This case displays a process that is centered and is within the tolerance of the
customer CTQs. Because this definition of conformance defines "good quality" as all of the bullets
landing within the bullseye tolerance band, there is little interest in whether the bullets are exactly
centered. For the most part, variation (or dispersion) within the CTQ specification limits is not an
issue for the customer.
[Figure 2: the same three cases shown as output distributions plotted against the specification limits]
One can easily see the direct relationship of Figure 2 to Figure 1. In Figure 2, the far left picture
displays wide variation that is centered on the target. The middle picture shows little variation, but
off target. And the far right picture displays little variation centered on the target. Shaded areas
falling between the specification limits indicate process output meeting specifications; shaded areas
falling either to the left of the lower specification limit or to the right of the upper specification
limit indicate items falling outside the specification limits.
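The shaded-area idea can be sketched numerically, assuming normally distributed output
(the mean and standard deviation below are illustrative; the spec limits come from the
tire-tread example above, 5/8 inch plus or minus 0.05 inch):

# Fraction of process output inside and outside the specification limits.
from scipy.stats import norm

lsl, usl = 0.575, 0.675      # tread-depth spec: 5/8 inch +/- 0.05 inch
mu, sigma = 0.625, 0.015     # assumed process mean and standard deviation

inside = norm.cdf(usl, mu, sigma) - norm.cdf(lsl, mu, sigma)
print(f"fraction conforming:  {inside:.6f}")
print(f"fraction out of spec: {1 - inside:.6f}")

Shifting mu off center or inflating sigma reproduces the first two cases of Figure 1:
the shaded area outside the limits grows.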
Conclusion
Improvements in meeting customer CTQs and specification limits are objective measures of quality that translate
directly into quality gains, because transactional processing errors, late deliveries and product defects are
regarded as undesirable by all customers.
Shewhart (1931, 1980) defined control as follows: "A phenomenon will be said to be
controlled when, through the use of past experience, we can predict, at least within
limits, how the phenomenon may be expected to vary in the future."
The critical point in this definition is that control is not defined as the
complete absence of variation. Control is simply a state where all
variation is predictable variation. A controlled process isn't necessarily a
sign of good management, nor is an out-of-control process necessarily
producing non-conforming product.
Figure IV.5 illustrates the need for statistical methods to determine the
category of variation.
16. Explain with an example the “Testing of Hypothesis”. What do you mean by “Type I” and
“Type II” error with respect to “Hypothesis Testing”?
17. (a) Explain in brief the different Control charts for Attributes.
(b) The following figure gives number of defectives in 20 samples containing 2000 items.
425, 430, 216, 341, 225, 322, 280, 306, 337, 305, 356, 402, 216, 264, 126, 409, 193,
280, 389, 326
Calculate the values for central line and control limits for P chart.
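A worked sketch for part (b), treating this as a p chart with 3-sigma limits (assuming,
as stated, that every one of the 20 samples contains n = 2000 items):

# Center line and control limits for the p chart of Q17(b).
import numpy as np

defectives = np.array([425, 430, 216, 341, 225, 322, 280, 306, 337, 305,
                       356, 402, 216, 264, 126, 409, 193, 280, 389, 326])
n = 2000                                            # items per sample
p_bar = defectives.sum() / (n * len(defectives))    # center line
sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = max(p_bar - 3 * sigma_p, 0.0)
print(f"CL = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")
# For these data: CL is about 0.1537, UCL about 0.1779, LCL about 0.1295.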
18. Explain the "Cp" and "Cpk" indices. Differentiate between Process Stability and Process
Capability.
Six Sigma process performance is reported in terms of Sigma. But the statistical measurements of Cp, Cpk, Pp,
and Ppk may provide more insight into the process. Learn the definitions, interpretations and calculations for Cp,
Cpk, Pp and Ppk.
In the Six Sigma quality methodology, process performance is reported to the organization as a sigma level. The
higher the sigma level, the better the process is performing.
Another way to report process capability and process performance is through the statistical
measurements of Cp, Cpk, Pp, and Ppk. This article will present definitions, interpretations and
calculations for Cpk and Ppk through the use of forum quotations. Thanks to everyone below who helped
contribute to this reference.
• Definitions
• Interpreting Cp, Cpk
• Interpreting Pp, Ppk
• Differences Between Cpk and Ppk
• Calculating Cpk and Ppk
Definitions
Cp = Process Capability. A simple and straightforward indicator of process capability.
Cpk = Process Capability Index. Adjustment of Cp for the effect of non-centered distribution.
Pp = Process Performance. A simple and straightforward indicator of process performance.
Ppk = Process Performance Index. Adjustment of Pp for the effect of non-centered distribution.
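The standard formulas behind these definitions can be sketched as follows (the numbers
are illustrative; which sigma estimate you plug in, within-subgroup or overall, is what
distinguishes Cp/Cpk from Pp/Ppk):

# Cp ignores centering; Cpk penalizes a mean that sits off center.
def cp(usl, lsl, sigma):
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    return min(usl - mu, mu - lsl) / (3 * sigma)

usl, lsl = 0.675, 0.575        # example specification limits
mu, sigma = 0.640, 0.010       # an off-center process
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")        # 1.67: capable if centered
print(f"Cpk = {cpk(usl, lsl, mu, sigma):.2f}")   # 1.17: reduced by the offset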
"If you hunt our shoot targets with bow, darts, or gun try this analogy. If your shots are falling in the same spot
forming a good group this is a high cP, and when the sighting is adjusted so this tight group of shots is landing on
the bullseye, you now have a high cpK." Tommy
"Cpk measures how close you are to your target and how consistent you are to around your average
performance. A person may be performing with minimum variation, but he can be away from his target towards
one of the specification limit, which indicates lower Cpk, whereas Cp will be high. On the other hand, a person
may be on average exactly at the target, but the variation in performance is high (but still lower than the tolerance
band (i.e. specification interval). In such case also Cpk will be lower, but Cp will be high. Cpk will be higher only
when you r meeting the target consistently with minimum variation." Ajit
"You must have a Cpk of 1.33 [4 sigma] or higher to satisfy most customers." Joe Perito
"Consider a car and a garage. The garage defines the specification limits; the car defines the output of the
process. If the car is only a little bit smaller than the garage, you had better park it right in the middle of the
garage (center of the specification) if you want to get all of the car in the garage. If the car is wider than the
garage, it does not matter if you have it centered; it will not fit. If the car is a lot smaller than the garage (six sigma
process), it doesn't matter if you park it exactly in the middle; it will fit and you have plenty of room on either side.
If you have a process that is in control and with little variation, you should be able to park the car easily within the
garage and thus meet customer requirements. Cpk tells you the relationship between the size of the car, the size
of the garage and how far away from the middle of the garage you parked the car." Ben
"The value itself can be thought of as the amount the process (car) can widen before hitting the nearest spec limit
(garage door edge).
Cpk=1/2 means you've crunched nearest the door edge (ouch!)
Cpk=1 means you're just touching the nearest edge
Cpk=2 means your width can grow 2 times before touching
Cpk=3 means your width can grow 3 times before touching" Larry Seibel
Interpreting Pp, Ppk
"Process Performance Index basically tries to verify if the sample that you have generated from the process is
capable to meet Customer CTQs (requirements). It differs from Process Capability in that Process Performance
only applies to a specific batch of material. Samples from the batch may need to be quite large to be
representative of the variation in the batch. Process Performance is only used when process control cannot be
evaluated. An example of this is for a short pre-production run. Process Performance generally uses sample
sigma in its calculation; Process capability uses the process sigma value determined from either the Moving
Range, Range, or Sigma control charts." Praneet
"Ppk produces an index number (like 1.33) for the process variation. Cpk references the variation to your
specification limits. If you just want to know how much variation the process exhibits, a Ppk measurement is fine.
If you want to know how that variation will affect the ability of your process to meet customer requirements
(CTQ's), you should use Cpk." Michael Whaley
"It could be argued that the use of Ppk and Cpk (with sufficient sample size) are far more valid estimates of long
and short term capability of processes since the 1.5 sigma shift has a shaky statistical foundation." Eoin
"Cpk tells you what the process is CAPABLE of doing in future, assuming it remains in a state of statistical
control. Ppk tells you how the process has performed in the past. You cannot use it predict the future, like with
Cpk, because the process is not in a state of control. The values for Cpk and Ppk will converge to almost the
same value when the process is in statistical control. that is because Sigma and the sample standard deviation
will be identical (at least as can be distinguished by an F-test). When out of control, the values will be distinctly
different, perhaps by a very wide margin." Jim Parnella
"Cp and Cpk are for computing the index with respect to the subgrouping of your data (different shifts, machines,
operators, etc.), while Pp and Ppk are for the whole process (no subgrouping). For both Ppk and Cpk the 'k'
stands for 'centralizing facteur'- it assumes the index takes into consideration the fact that your data is maybe not
centered (and hence, your index shall be smaller). It is more realistic to use Pp & Ppk than Cp or Cpk as the
process variation cannot be tempered with by inappropriate subgrouping. However, Cp and Cpk can be very
useful in order to know if, under the best conditions, the process is capable of fitting into the specs or not.It
basically gives you the best case scenario for the existing process." Chantal
"Cp should always be greater than 2.0 for a good process which is under statistical control. For a good process
under statistical control, Cpk should be greater than 1.5." Ranganadha Kumar
"As for Ppk/Cpk, they mean one or the other and you will find people confusing the definitions and you WILL find
books defining them versa and vice versa. You will have to ask the definition the person is using that you are
talking to." Joe Perito
"I just finished up a meeting with a vendor and we had a nice discussion of Cpk vs PPk. We had the definitions
exactly reversed between us. The outcome was to standardize on definitions and move forward from there. My
suggestion to others is that each company have a procedure or document (we do not) which has the definitions
of Cpk and Ppk in it. This provides everyone a standard to refer to for WHEN we forgot or get confused." John
Adamo
"The Six Sigma community standardized on definitions of Cp, Cpk, Pp, and Ppk from AIAG SPC manual page
80. You can get the manual for about $7." Gary
"Cpk is calculated using an estimate of the standard deviation calculated using R-bar/d2. Ppk uses the usual
form of the standard deviation ie the root of the variance or the square root of the sum of squares divided by n-1.
The R-bar/D2 estimation of the standard deviation has a smoothing effect and the Cpk statistic is less sensitive to
points which are further away from the mean than is Ppk." Eoin
"Cpk is calculated using RBar/d2 or SBar/c4 for Sigma in the denominator of you equation. This calculation for
Sigma REQUIRES the process to be in a state of statistical control. If not in control, your calculation of Sigma
(and hence Cpk) is useless - it is only valid when in-control." Jim Parnella
"You can have a 'good' Cpk yet still have data outside the specification, and the process needs to be in control
before evaluating Cpk." Matt
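The distinction described in the calculation quotes above can be sketched directly (the
subgroup data and the d2 constant for subgroups of size 4 are illustrative assumptions):

# Two sigma estimates: within-subgroup (R-bar/d2, used for Cp/Cpk) vs.
# overall sample standard deviation (used for Pp/Ppk).
import numpy as np

subgroups = np.array([
    [10.1, 10.3, 10.2, 10.0],
    [10.2, 10.4, 10.1, 10.3],
    [ 9.9, 10.1, 10.0, 10.2],
    [10.0, 10.2, 10.3, 10.1],
])
d2 = 2.059                                       # control-chart constant for n = 4
r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
sigma_within = r_bar / d2                        # smoothed, Cpk-style estimate
sigma_overall = subgroups.ravel().std(ddof=1)    # Ppk-style estimate
print(f"sigma_within  = {sigma_within:.4f}")
print(f"sigma_overall = {sigma_overall:.4f}")
# In a stable process the two estimates converge, and so do Cpk and Ppk.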