
AASTHA VYAS-12001 ABHISHEK C PANDURANGI-12002

A survey of 1,200 employees found that 25% of US workers believed they could accomplish at least 50% more on the job each day.

Barriers to Productivity
Not supervising the work: 37%
Not involving employees in decision making: 34%
Not rewarding performance: 29%
No promotion opportunities: 29%
No training: 28%
Not hiring the right people: 26%

A survey of 231 HR specialists reported the following as leading issues in compensation and benefits:
Managing performance: 12.12%
Team-based pay: 10.39%
Competency-based pay: 9.96%

ECS/Watson Wyatt Data Services studied 1,500 companies:
75% use variable pay for middle managers
Two-thirds of that variable pay was given as an annual bonus
26% of supervisors were given cash rewards such as bonuses

Edward Perlin Associates surveyed 63 companies with data-processing professionals. Average annual compensation, including salary and bonus:
Security managers: $79,900
Security heads: $94,800
Security specialists: $42,900

A survey of 4,800 members of the Institute of Management Accountants reported the following average annual salaries.

In the 30-39 year age bracket:
$57,397 for certified management accountants
$47,332 for uncertified management accountants

In the 19-29 year age bracket:
$40,185 for certified management accountants
$31,008 for uncertified management accountants

A survey of 1,935 internet workers by the Association of Internet Professionals reported the following average annual salaries:
Online services manager: $59,781
Software development person: $64,024
Media production person: $42,455

How can the national average salary for computer security positions be estimated using sample data? How much error is involved in such an estimation? How much confidence do we have in this estimation?

n = 63, assumed standard deviation = $5,500, mean = $79,900, confidence level = 95%
The 95% confidence interval is ($78,542, $81,258), so the point estimate carries a margin of error of $1,358.
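The steps above can be sketched in Python (the $5,500 standard deviation is the slide's assumption, and z = 1.96 is the standard value for 95% confidence):

```python
import math

# 95% confidence interval for the mean security-manager salary,
# assuming (as the slide does) a population standard deviation of $5,500.
n = 63          # sample size (Edward Perlin Associates survey)
mean = 79_900   # sample mean salary for security managers
sigma = 5_500   # assumed population standard deviation
z = 1.96        # z-value for 95% confidence

error = z * sigma / math.sqrt(n)   # margin of error
lower, upper = mean - error, mean + error

print(f"margin of error: ${error:,.0f}")          # ≈ $1,358
print(f"95% CI: (${lower:,.0f}, ${upper:,.0f})")  # ≈ ($78,542, $81,258)
```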

How does a business researcher use sample information to estimate population parameters like the mean national average annual salary for accountants? For internet workers? What is the error in such a process? Are we certain of the results?

A survey was conducted to estimate the population parameters. A point estimate has been used to estimate the mean national average salary. A point estimate is only as good as the sample it comes from; if different random samples are taken, the estimate may vary. For this reason, an interval estimate is preferred over a point estimate.

One survey reported that 37% of the workers felt that companies are not supervising enough. This figure came from a survey of 1,200 employees and is only a sample statistic. Can we conclude from this that 37% of all US employees feel the same way? Why or why not? Can we use the 37% as an estimate of the population parameter? If we do, how much error is there, and how much confidence do we have in the final result?

n = 1200, confidence level = 95%, p = .37, 1 - p = .63
Confidence interval = (.343, .397), error = .027 or 2.7%
The 37% cannot simply be assumed to hold for the entire US population; it is an estimate with a margin of error of 2.7 percentage points at 95% confidence.
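The proportion interval above can be sketched the same way (z = 1.96 for 95% confidence):

```python
import math

# 95% confidence interval for the proportion of workers who feel
# companies are not supervising enough, using the slide's numbers.
n = 1200
p = 0.37
z = 1.96

error = z * math.sqrt(p * (1 - p) / n)   # margin of error for a proportion
lower, upper = p - error, p + error

print(f"error: {error:.3f}")                  # ≈ 0.027
print(f"95% CI: ({lower:.3f}, {upper:.3f})")  # ≈ (0.343, 0.397)
```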

A survey of 231 HR specialists found that 12.12% cited managing performance as the dominant issue facing compensation and benefits managers. Can this figure be used to represent all compensation and benefits managers? If so, how much potential error is there in doing so? Since this sample size is 231 while the employee survey on productivity used a sample of 1,200, does the accuracy of the results of the two surveys differ? Does the size of the sample affect the predictability of a survey's results?

n = 231, confidence level = 90%, p = .1212 or 12.12%
Confidence interval = (0.0862, 0.1562), error = 0.035
Increasing the sample size to 1,200 would bring the error down to about 0.015, so the accuracy of the two surveys does differ. Thus, increasing the sample size reduces the error.
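The comparison can be sketched by computing the 90% margin of error at both sample sizes (z = 1.645 for 90% confidence):

```python
import math

# How the margin of error for p = .1212 shrinks as n grows (90% confidence).
z = 1.645   # z-value for 90% confidence
p = 0.1212

for n in (231, 1200):
    error = z * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:4d}: error = {error:.3f}")
# n = 231 gives an error near 0.035; n = 1200 brings it near 0.015
```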

Why did these research companies choose to sample 1200, 231, 1500, 63, and 4800 firms or people respectively? What is the rationale for determining how many to sample other than to sample as few as possible to save time or money? Is a minimum sample size necessary to accomplish estimation?

The size of the acceptable error and the desired level of confidence should both be taken into account. The acceptable error and the required sample size are inversely related. Very large samples incur high costs, so to minimize cost, the largest error that is tolerable should be used to set the minimum sample size.
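One standard way to turn a tolerable error into a minimum sample size for a proportion is n = z²·p(1-p)/E². A short sketch (p = 0.5 is the conservative worst-case assumption, not a figure from the slides):

```python
import math

# Minimum sample size needed to estimate a proportion within a chosen
# margin of error E at 95% confidence -- the trade-off described above.
def min_sample_size(E, z=1.96, p=0.5):
    # p = 0.5 maximizes p*(1-p), giving the largest (safest) sample size.
    return math.ceil(z**2 * p * (1 - p) / E**2)

for E in (0.05, 0.03, 0.025):
    print(f"E = {E}: n >= {min_sample_size(E)}")
# shrinking the tolerable error sharply increases the required sample
```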

Thank You
