

The State of Operations Research (ORMS Today, February 2006, Volume 33, Number 1)

Published by Ioannis Anagnostakis - PDM on 30/01/2007

New editor in chief re-visits mission, scope and coverage areas of profession's "flagship journal."

By David Simchi-Levi

The introduction of a new editorial board is a unique opportunity to evaluate the reputation, stature and health of the journal. It is also an opportunity to examine its mission, scope and coverage areas, the types of papers submitted and published, and the level of satisfaction of authors and readers. Indeed, this is the time to identify what works well, what needs to be improved and what requires a significant change in direction and emphasis. Similarly, it is an opportunity to reflect on changes in the profession and society that should influence the journal. This is exactly the objective of my editorial.

To help in this process, I reviewed the results of a Web-based survey of subscribers to Operations Research conducted by INFORMS, and I complemented these data with interviews and discussions with numerous people in academia and industry. The message I received is consistent and unambiguous: Operations Research is, and has been for more than 50 years, the flagship journal of the profession. The journal has an excellent reputation for publishing high-quality papers and, together with Management Science, has served as the primary outlet for scientific research in the field of operations research.

Nevertheless, important challenges exist. In the early days of the field, the focus was mostly on the development of quantitative methods to solve operational and managerial problems. More recently, the field has matured, and while new or more effective methods are still of interest, the emphasis of current research has shifted toward solving more relevant problems. This shift entails expanding the scope and coverage of the journal so that it reflects, and possibly influences, the evolution of the profession. At the same time, we have seen the proliferation of scientific journals focused on quantitative models and methods in various, often more specialized, operations and management areas.
Many of these journals have been introduced by INFORMS in the last three decades: they include INFORMS Journal on Computing, Manufacturing & Service Operations Management, Mathematics of Operations Research, Marketing Science and Transportation Science. Other O.R. journals, such as Mathematical Programming, Operations Research Letters, Naval Research Logistics and Networks, compete in a similar marketplace. While this proliferation creates opportunities for authors, it demands the clarification of the scope and mission of Operations Research, the flagship INFORMS journal.

Mission and Scope

The mission statement of the journal is "to serve the entire operations research community, including practitioners, researchers, educators and students." Thus, the scope of Operations Research must be broad enough to cover both methodology and applications, yet restricted to high-quality, truly insightful papers. This is clearly an important distinction between Operations Research and other, more focused journals sponsored by INFORMS or its subdivisions. Indeed, with the exception of Management Science, all other journals published by INFORMS attract papers that are of interest to a specific community. This is not the case for Operations Research, as it has always emphasized the publication of papers that are of interest to more than a small portion of the society.

Evidently, such a broad scope does not distinguish Operations Research from another important journal in the field, namely Management Science. However, in the last few years, Management Science has emphasized "research motivated by strategic issues," as well as research important to a practicing manager. By contrast, I believe Operations Research should attract and publish papers focusing on the science and engineering of operations. Specifically, the science of operations refers not only to contributions to theory and the development of new methods, but also to analytical frameworks, quantitative relationships and mathematical models, some of which may provide only insights into various problems, not necessarily specific numerical solutions. Two good examples in this category are the celebrated Little's Law from queueing theory and the more recent literature on supply contracts illustrating the impact of risk sharing between suppliers and buyers.

By the same token, the engineering of operations focuses on solving specific operational problems and hence requires real data and demands the development of computationally tractable algorithms.
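Little's Law (L = λW: the time-average number of customers in a system equals the arrival rate times the mean time each customer spends in the system) is a good illustration of an insight-generating relationship, because it holds regardless of the arrival process, the service distribution or the queue discipline. A minimal sketch in Python (a toy single-server queue; the rates and distributions are invented for illustration, not taken from the article) checks the identity empirically:

```python
import random

# Toy single-server FIFO queue with exponential interarrival and service times.
# (Illustrative only: Little's Law itself needs no distributional assumptions.)
random.seed(42)
N = 50_000
arrivals, t = [], 0.0
for _ in range(N):
    t += random.expovariate(0.8)   # arrival rate 0.8 per unit time
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(N)]  # service rate 1.0

# Departure recursion: service starts when both customer and server are ready.
departures, free_at = [], 0.0
for a, s in zip(arrivals, services):
    free_at = max(a, free_at) + s
    departures.append(free_at)

T = departures[-1]                            # observation horizon
sojourns = [d - a for a, d in zip(arrivals, departures)]
W = sum(sojourns) / N                         # mean time in system
lam = N / T                                   # observed arrival rate
L = sum(sojourns) / T                         # time-average number in system

print(f"L = {L:.4f}, lambda * W = {lam * W:.4f}")
```

With these time-average definitions the two quantities agree exactly; the force of the law is that nothing about the exponential assumptions above was used to get there.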
Examples here include algorithms for the design of telecommunications networks or for the design and operations of supply chains. In this case, the emphasis is on a new solution methodology that is practical and effective in solving real-world problems. The implications of the previous statements are clear. I would like to see Operations Research attracting and publishing high quality managerial or technical papers that are based on rigorous mathematical models. Such papers should demonstrate potential impact on practice. Thus, the journal is interested in papers that focus on one or more of the following dimensions:

- define new problem domains for the field;
- introduce innovative concepts and mathematical formulations of problems;
- provide new insights into operational problems;
- develop new methodologies to approach known and new problems; and
- apply operations research methods in a creative way to interesting application areas.

At the same time, the journal is looking for papers that are of interest to the entire community. These include:

- survey papers that summarize and update readers on the current state of the art of a major topic in operations research;
- important historical surveys and overviews of the profession and its intellectual heritage; and
- position papers suggesting new research directions for the profession or analyzing and critiquing current trends.

Areas of Coverage

The perception of the journal in the community obviously affects the type and breadth of papers submitted. At present, there seems to be a perception that Operations Research is too focused on technical contributions and that some areas of interest to the community are not covered by the journal. My objective is thus to broaden the journal's content, and consequently the field, by publishing material that covers the entire spectrum of problems of interest to the community and by identifying new and emerging areas.

In fact, Larry Wein, the former editor in chief, started this process by introducing the area of Financial Engineering. The new editorial board will continue in that direction by introducing two new areas, Revenue Management and Marketing Science. Both areas have a long tradition in the field and use rigorous mathematical models to improve decision-making. It is true that even without these two areas, papers in revenue management and marketing science have been submitted to and published in the journal. However, by giving these two areas a more visible place on the editorial board, and by selecting first-class researchers to lead them, the journal will encourage more submissions in these domains.

The second initiative is to reposition and broaden some of the existing areas of the journal. For example, "O.R. Chronicle" is now replaced by "O.R. Forum," an area long abandoned by the journal. Re-introducing the O.R. Forum will allow the journal to expand its horizon and attract not only historical essays, but also thoughtful and substantive position papers that may suggest new research directions for the profession or reflect on current trends. With the introduction of the Web-based discussion forum (see below), I envision a lively and important interaction among the journal's readers reflecting various points of view.

Similarly, the board is expanding the military area so that it also covers the growing literature that applies operations research techniques to homeland security. This is reflected in the area title, Military and Homeland Security. Thus, while the area is still interested in papers that address classical military problems as well as current defense issues, it encourages authors to submit papers focusing on the war on terror or on large-scale disasters such as earthquakes and hurricanes.
Finally, the board is repositioning the area of Computing and Decision Technology, which for many years served as an interface with the computer science community. The area, now titled Computing and Information Technologies, will encompass research of a computational nature that lies at the boundaries between operations research and fields not obviously covered by the journal's other areas. Thus, the area is interested not only in advances in computational approaches for solving complex problems, and associated decision support interfaces, but also in the application of operations research approaches and techniques to computational biology, information system design, learning theories, nanotechnology and complex systems analysis. To summarize, the journal now has the following 16 areas:

- Decision Analysis
- Environment, Energy and Natural Resources
- Financial Engineering
- Manufacturing, Service and Supply Chain Operations
- Marketing Science
- Military and Homeland Security
- O.R. Forum
- Computing and Information Technologies
- O.R. Practice
- Optimization
- Policy Modeling and Public Sector O.R.
- Revenue Management
- Simulation
- Stochastic Models
- Telecommunications and Networking
- Transportation

Taken together, these areas span the entire spectrum of research in our community. They build on areas where the journal has already had significant strengths, such as Decision Analysis, Optimization, Stochastic Models, and Manufacturing, Service and Supply Chain Operations. At the same time, they are designed to stimulate research and submissions in emerging areas such as Financial Engineering, Revenue Management and the application of operations research to other sciences through Computing and Information Technologies.

Encouragement of New Areas of Study

An important objective of the new editorial board is to encourage new areas of study. Specifically, the board believes the field and the journal have the opportunity to significantly impact new areas that have not traditionally been explored. Indeed, some of the areas that can attract the attention of academia, industry and government, and where the journal can play an important role, include: data mining, operations in service firms and organizations, homeland security, risk management and emerging technologies such as RFID. At the same time, new methodologies, such as robust optimization and approximate dynamic programming, suggest creative ways of solving a variety of classical and new problems.

Of course, the challenge is to provide a vehicle that helps to attract new research areas. For this purpose I am planning three activities:

1. Publish special issues and surveys that will serve as a source for defining and highlighting the state of the art in emerging areas.
2. Invite experts from other fields to submit review articles on the use of rigorous mathematical models in their respective domains. The objective is to expose the operations research community to non-traditional applications and thus stimulate our own research and broaden the field.
3. Develop a Web-based discussion forum that will feature an important paper and facilitate comments from readers of the journal. The editor will screen comments before they are published on the Web site. The new discussion forum is scheduled for release together with the new journal Web site at the beginning of the year.

The Editorial Board and Publication Criteria

The role of the area editors is to achieve the editorial mission in their corresponding areas and to serve as a board whose objective is to move the journal and the field of operations research forward. I have sought for the editorial board a group of first-class researchers with significant practical experience. These are the people responsible for maintaining the high acceptance standards of the journal.

The high standards that Operations Research strives for need to be clearly articulated to the community. This will help eliminate false expectations, reduce pressure on the review process, and ultimately improve the experience of authors and readers. For this purpose, the new board has identified the basic questions that every paper in the journal should address. These include:

- Is the problem important?
- Is the research interesting to a wide range of people in the field?
- Does the paper have the potential to make an impact on practice?

At the same time, as in many other scientific journals, every paper should stand the test of questions such as:

- Is the paper intellectually deep?
- Can the paper stand the test of time?
- Is the analysis correct and rigorous?
- Is the paper well-written?

Of course, even with these criteria, there is significant room for interpretation. Hence, maintaining uniform standards across all areas, perhaps with the exception of O.R. Practice and O.R. Forum, will not be easy. Thus, the new board has agreed to hold quarterly meetings to review the status of the journal and to identify new initiatives that the journal should take. In parallel, the board has agreed on quarterly updates to all associate editors so they are aware of the journal's status and standards. These updates will include information on submission rates to the various areas, rejection rates and, most importantly, review cycle times. The data will allow area editors and associate editors to measure their performance against the review cycle times of others.

Finally, I am complementing the editorial board with a new advisory board whose objective is to discuss major issues important to the journal, for example, new areas of coverage or the impact of the Internet on the journal. I am pleased to report that the following distinguished members of the community have accepted my invitation to serve on the board: Patrick T. Harker, Hau L. Lee, Thomas L. Magnanti, George L. Nemhauser, William P. Pierskalla, Donald H. Ratliff and Ward Whitt.

Identifying Big-Impact Papers

An important challenge faced by the editorial board is to encourage and identify "big-idea, big-impact" articles. These are the papers that motivate new lines of research and are extended or built upon by others. Typically, these are highly cited papers, but often their importance and impact are recognized only a few years after publication. Evidently, the review of submitted papers is subjective, and its quality depends on the referees, the associate editors and the editorial board. Unfortunately, rejecting papers on purely technical grounds is straightforward and is considered a sign of high standards, while identifying big-impact papers is very difficult. My objective is thus to make sure that, in our efforts to maintain and increase the quality of the journal, Operations Research does not lose opportunities to publish papers that make a difference. To help identify and publish these papers, I offer the following guidelines:

Authors should recognize that well-written, concise papers that clearly identify the contribution of the research will be easier for the editorial board to review. This has the added benefit of shortening the review process and allowing other authors to cite the published paper much sooner.

The editorial board should take risks when necessary and be flexible when appropriate. That is, I prefer that the editorial board identify, encourage and help papers that have the potential to make a big impact on the field rather than reject papers prematurely. This implies that the role of the editorial board is not only to control the process but, most importantly, to look for papers that make important contributions to the science and engineering of operations.

Reducing and Managing Delays and Backlog

One important concern of many authors is the review cycle time and the backlog of accepted papers. I strongly believe that shortening the review process is possible, but it requires the editorial board to closely monitor the process and recognize the work of exceptional associate editors and referees. It also requires penalizing consistently delinquent referees and associate editors. The stated objective of Operations Research is to complete the review process within four months. The area editor can, however, make exceptions to this four-month cycle time for papers that he or she deems too long or complex. Such papers may require more time for careful reviewing. Thus, our goal is not rapid turnaround, which can always be achieved with short and uninformative reviews. Rather, the goal of Operations Research is to provide authors with high-quality reviews of their papers in a timely fashion.

To achieve this goal, we plan to use Manuscript Central for online submission and peer review. My experience with this system, which the journal recently launched, is that it will help to significantly increase the number of submitted papers and significantly decrease the review cycle time. Indeed, Manuscript Central will help the area editors and the editor in chief track the review process, alert the associate editor and the area editor when a paper has been with a referee for a long time, and supply various statistics to the editorial board.

In parallel, Operations Research is going to limit the number of published pages. Thus, while there will be no limit on the length of a submitted paper, there will be a limit on the number of pages that appear in the printed version. This limit, not including graphs and tables, is 30 double-spaced pages with one-inch margins on all four sides and text in 11-point font. If a paper is accepted, all submitted material beyond this limit will be published in the online supplement of the journal. This initiative is designed to push authors toward shorter, concise and well-written articles in print. Such articles are easier to review, which helps reduce the review cycle time. Similarly, this limit will help reduce the publication backlog.

The length of the publication backlog is an important concern. On one hand, a healthy backlog guarantees a smooth production and publication process. On the other hand, when the backlog is too long (currently it takes 12 months from acceptance to publication), it may deter authors from submitting papers. In addition, the combined delay associated with the review process and the publication backlog, while undesirable in general, is particularly inappropriate for timely topics such as energy, policy modeling or homeland security. INFORMS has agreed to increase the page count of the journal by 20 percent over the next two years, which will hopefully reduce the publication backlog to six months.

Final Words

Operations Research has been the flagship journal of the profession for more than 50 years. It has an outstanding reputation due to the work of many people, but in particular former editors in chief George L. Nemhauser, William P. Pierskalla, Thomas L. Magnanti, Donald H. Ratliff, Patrick T. Harker and Lawrence M. Wein. I am honored to follow in their footsteps and build on what has already been achieved through hard work and commitment to the journal. The initiatives described in this document are meant to build on their success and help make the journal an even better outlet for authors and a more exciting place for readers.

David Simchi-Levi is the editor in chief of Operations Research. This article was published in Operations Research (Vol. 54, No. 1) and is reprinted here with permission.



Amateur Operations Research (ORMS Today, August 2006, Volume 33, Number 4)


Teaching O.R. to execs: a threat to the profession or an opportunity to promote it?

By Peter C. Bell

We need more operations researchers; there are far too many issues and problems out there for the current inventory of professional operations researchers to be able to address. One solution to this problem is to train more people to actually do operations research. Some of these trainees should have degrees in O.R. and be trained as O.R. generalists, but there is a second possibility: Perhaps we could train well-educated people to think like O.R. practitioners and apply some of the basic tools of O.R. under limited circumstances? If these "amateur" O.R. practitioners could demonstrate the benefits of O.R., this might encourage their employers to hire professional O.R. people, thereby promoting the profession.

Here is one possibility: Could we train well-educated, bright people who occupy a management position in an organization to examine their work environment through the lens of O.R.? These "students" would not need exposure to the full portfolio of O.R. tools and techniques. First, they are busy people and will not have time for our full educational package. Second, the great majority of the content of the traditional O.R. course will be totally irrelevant to their particular work environment.

Dan Elwing, when he was CEO of ABB Electric, said that O.R. is "not a project or a set of techniques; it is a process, a way of thinking and managing" (Interfaces, January-February, 1990). The hypothesis is, therefore, that a managerial audience can obtain immediate value from exposure to a few of the key ideas that make up the O.R. "way of thinking and managing."

To test this hypothesis, we first have to lay out a set of O.R.'s key concepts that will be useful for this audience. My version of seven of these key ideas has been published earlier in OR/MS Today (August 2005, pp. 22-27; http://lionhrtpub.com/orms/orms-805/freveryone.html) and will not be repeated here. Other O.R. instructors may have different lists of the ideas that make up the O.R. lens, and that's fine; the point is that there are these extremely useful basic concepts that are well known within the O.R. community but virtually unknown to the outside world. If O.R. people do not expose students to these ideas, no one else will.

Once we have a list of the key concepts, we must find an audience. Many business schools (including the Richard Ivey School of Business) have Executive MBA (EMBA) programs that attract successful, experienced managers, most of whom have never been exposed to O.R. These programs provide an opportunity to judge whether managers who are in a position to implement change can use the O.R. "way of thinking and managing" to create immediate benefit for their employer and, perhaps, themselves.
What follows is not an elegant statistical test, but rather a summary of several years of experience teaching Ivey EMBAs. The EMBA students take a course (seven half-day classes) on "Management Science and Statistics" and are then required to complete a project in their own organizations demonstrating the use of some of the tools and concepts from the course. The project counts for about half the course grade, and students are told that "an excellent project will include some feedback from the 'decision-maker' on the usefulness of your approach, or demonstration of a meaningful organizational change prompted by your analysis." The project is a key component in achieving my objective for this course, which is to provide EMBA students with skills that are both new and immediately useful at their place of work. A couple of comments taken from the free-form section of the teaching ratings suggest that students are getting the message:

"I have learned invaluable tools that I have put to immediate use and that have really changed the way I think about business."

"Management Science was nothing short of a wonderful course, and I have learned a tremendous amount in just a relatively short time. In fact, I now see so many applications everywhere...."

"I have always contended that there are two types of people in the world: 'words' people and 'numbers' people. I counted myself among the former. Management Science has changed all that. Now I say, 'Show me the data and I'll show you the money!'"

Here are just a few examples of course projects undertaken by students in the Ivey EMBA program:

Demand at a major transportation company exceeded the ability to ship Monday-Thursday, while there was expensive excess shipping capacity the remainder of the week. The business strategy of this company was entirely based on fixed origin-destination prices. The O.R. lens suggests that time is often an important element of product demand: For this company, shipping Monday-Thursday might be viewed as a different product from shipping Friday-Sunday. The fix: raise equivalent origin-destination prices Monday-Thursday and reduce prices Friday-Sunday. The result: demand better matched capacity, and tonnage shipped and revenues both went up. (This same issue has been addressed a number of times in various shipping/delivery companies, with several appearing to be quite successful.)

A major producer outsourced all its packaging, ran an auction to purchase boxes, cartons, etc., and received item-by-item bids from several suppliers. The purchasing department first selected the winning bids based on low cost per item, but then imposed some aggregate constraints to ensure that no supplier received too great an increase or too large a decrease in business from the previous year. It took quite a bit of "jiggling" to find a solution that met the aggregate constraints while trying to stick to the low bid per item. Purchasing was happy with the resulting winners and the resulting total packaging cost of about $30 million. The O.R. lens suggests that choosing winning bids with aggregation constraints is a simultaneous decision problem and that humans are very poor at solving such problems intuitively. Excel Solver, however, can do this brilliantly. The result: Standard Excel Solver produced a solution that reduced the cost of packaging by more than $2 million. Implementation was trivial: the purchasing department just had to assign different products to different suppliers.
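The packaging-auction project above is a small integer program: pick a winning supplier for every item so that total cost is minimized subject to aggregate constraints. A minimal sketch in Python (toy bids invented for illustration, and a simple items-per-supplier cap standing in for the article's year-over-year share constraints; in practice Excel Solver or an ILP solver handles realistic sizes):

```python
from itertools import product

# Hypothetical per-item bids (rows: items, columns: suppliers A, B, C).
# All numbers are made up; the real auction had many more items.
bids = [
    [10, 12, 14],
    [11, 10, 13],
    [ 9, 12, 11],
    [12, 11, 10],
    [10, 11, 12],
    [13, 10, 11],
]
MAX_ITEMS_PER_SUPPLIER = 2   # stand-in for the article's aggregate constraints

best_cost, best_assign = float("inf"), None
for assign in product(range(3), repeat=len(bids)):      # all 3^6 assignments
    if any(assign.count(s) > MAX_ITEMS_PER_SUPPLIER for s in range(3)):
        continue                                        # violates aggregate cap
    cost = sum(bids[i][s] for i, s in enumerate(assign))
    if cost < best_cost:
        best_cost, best_assign = cost, assign

print("optimal assignment:", best_assign, "total cost:", best_cost)
```

In this toy instance the item-by-item cheapest picks cost 59 but hand one supplier three items, which violates the cap; the constrained optimum pays 61 to respect it. That trade-off is exactly why picking winners one item at a time and then "jiggling" is unreliable. Brute force is for exposition only; 3^n grows far too fast for a real auction.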
The owner of a chain of shops selling musical instruments thought it was really important to customer service to have one of every instrument in stock in each store. The O.R. lens suggests that this business strategy likely has two problems: high inventory carrying costs and poor service. High cost because the less popular instruments rarely sell, and poor service because a stocking level of one for a fast-moving item will result in many stockouts. The fix: do some rough demand forecasting to identify those instruments that sell, remove 75 percent of the instrument SKUs from each store and stock multiple units of the remaining SKUs. The result: a financial turnaround and a nice gift for the instructor (after the grades were in)!
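The stockout claim in that example is easy to quantify. Assuming Poisson demand over a replenishment cycle (an assumption of this sketch, not something stated in the article), the chance that what is on the shelf runs out is:

```python
from math import exp, factorial

def stockout_prob(stock: int, mean_demand: float) -> float:
    """P(demand over a replenishment cycle exceeds the units on the shelf),
    assuming Poisson demand (illustrative assumption, not from the article)."""
    p_at_most_stock = sum(exp(-mean_demand) * mean_demand**k / factorial(k)
                          for k in range(stock + 1))
    return 1.0 - p_at_most_stock   # P(demand > stock)

# A "fast-moving" instrument: mean demand of 2 units per replenishment cycle.
print(f"stock 1: {stockout_prob(1, 2.0):.0%} chance of a stockout")
print(f"stock 5: {stockout_prob(5, 2.0):.0%} chance of a stockout")
```

With a mean demand of 2 per cycle, one unit on the shelf stocks out roughly 59 percent of the time, while five units cut that to under 2 percent, which is the quantitative core of the "fewer SKUs, deeper stock" fix.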

A manufacturer of a product used by emergency services had two months during the year (not consecutive) when demand for its major products was "through the roof" and three or four months when demand was hard to see at all. This made manufacturing scheduling difficult, led to high inventory costs and put a premium on demand forecasting. The O.R. lens suggests that pricing can be used to help out the supply chain; in this case, price might be used to smooth demand. The fix: The CEO of the company estimated two (price, quantity) points for each major product each month, and these pairs of points were used to estimate demand curves for each product each month. A Solver model was used to find revenue-maximizing prices that met manufacturing capacity constraints. The CEO (who helped build and run the model) looked at the Solver output carefully, did some sensitivity analysis and made some adjustments to prices. He reported that revenue was up nicely but that demand had slipped back a bit more than he expected; he thought the slippage was probably just randomness and wasn't too worried about it.

The inheritor of some gold claims near an established mine was interested in whether the properties had any value. The O.R. lens suggests that the value of a gold claim is highly uncertain, since it depends on many parameters that are themselves uncertain, including how much gold the property contains, the cost of extraction and the price of gold. For the claims in question, some 15 parameters were involved, and each one was uncertain. An event simulator was constructed in Excel and used to obtain a distribution of the value of the claims. The outcome: claims that are, sadly, not worth much at today's price of gold, but an Ivey case that will help teach future students the value of risk and decision analysis.

A manufacturer was closing a plant and needed to migrate customers from the closing plant to the remaining plants. They had collected extensive shipping cost data but needed a student project to show how to use Solver to assign customers to plants in order to minimize production costs while not overloading any plant.

A private school was used during the summer to earn some much-needed revenue by running summer camps. Some weeks were very popular and sold out quickly; some were less popular and had unsold places. The O.R. lens suggests that there does not need to be the same price for each week, and that pricing high when demand is high and pricing low when demand is low can have two useful effects: increasing capacity utilization and enhancing revenues. The implementation plan called for gradual price changes so as not to upset the paying customers.
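The gold-claims project above is a classic Monte Carlo valuation, and the same pattern is easy to reproduce outside a spreadsheet. A minimal sketch in Python (three invented uncertain inputs with made-up distributions standing in for the article's roughly 15 parameters):

```python
import random
import statistics

random.seed(7)

def claim_value() -> float:
    """One Monte Carlo draw of the claim's value. The inputs and their
    distributions are invented for illustration; the real Excel model
    in the article had about 15 uncertain parameters."""
    ounces = random.lognormvariate(10.0, 0.8)   # recoverable gold, oz
    price  = random.gauss(600.0, 60.0)          # gold price at sale, $/oz
    cost   = random.gauss(550.0, 40.0)          # extraction cost, $/oz
    fixed  = 5_000_000                          # up-front development cost, $
    return ounces * (price - cost) - fixed

draws = [claim_value() for _ in range(20_000)]
print(f"mean value : ${statistics.mean(draws):,.0f}")
print(f"P(value<0) : {sum(v < 0 for v in draws) / len(draws):.0%}")
```

The output is a distribution rather than a single number, which is the point: a claim with a negative expected value and a thin chance of a large upside is a very different asset from one with the same mean and no upside, and only the simulated distribution reveals the difference.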

This list could go on; a great many of these projects appear to have benefited the client and have received supportive comments from "management." In many cases, the students had the authority to implement the results of their own work. Of course, there are caveats: Verification of the impact of these projects would certainly not meet the Edelman Prize judges' standards, and I have very limited information on the "stickiness" of these solutions. Nonetheless, this body of evidence does suggest some lessons on the effectiveness of O.R. done by "amateurs":

- Minimal exposure to some basic concepts of O.R. can lead intelligent people in management positions to view their work environment differently and can lead to the identification of opportunities for bottom-line benefits.
- In many cases, these opportunities can be taken up with nothing more than the intelligent application of a spreadsheet model. In some cases, some technical help may be required (such as "Solver won't converge!").
- Implementation, often seen as a major stumbling block for O.R. work, can be really easy, particularly when the decision-maker does the work.
- It is relatively easy for an amateur O.R. person in a management position in a fairly large organization to add a few million dollars to the bottom line. These benefits sometimes result from finding the time to study a persistent issue, but almost always the effectiveness of this study can be greatly enhanced by knowledge of some basic O.R. concepts.

The success of projects such as these might lead one to believe that these organizations would see the need to hire some O.R. professionals to move the work forward and to identify and exploit other opportunities, but the evidence here is mixed. In a number of cases, contacts with vendors were initiated and, in a few cases, budget was even set aside to pay for an application (examples include supply chain optimization and online procurement). However, many cases that appeared very promising ran into difficulties that prevented any additional O.R. work.

One major issue is the high cost of the transition from an "amateur" heuristic solution to an "algorithmic" professional O.R. solution. As an example, consider the transportation company (the first project example above, although there have been many similar projects) that had balanced capacity with demand by making some minor price changes and had seen revenues increase. Flush with enthusiasm for "revenue management," senior management wants a repeat of this early success. Contact with a vendor is initiated, the vendor assesses the situation and submits a proposal with a multi-thousand-dollar price tag to install a revenue management system. Suddenly management sees a certain cost but uncertain benefits; the "low-hanging fruit" has already been picked, and management no longer has to deal with the pressing issue of a mismatch between supply and demand. Without this strong motivation, and in the absence of clear benefits, the project is unlikely to proceed.

A second issue is the relative difficulty of accessing the O.R. marketplace. If a firm wants to hire a lawyer or an advertising agency, management knows where to look and what kind of specialty to look for. However, most firms wanting to hire someone to do some O.R. work have no idea what type of person to look for or where to find such a person, so they call their O.R. instructor and ask for help. The instructor can suggest outsourcing the O.R. or hiring someone to do the work in house. Outsourcing will likely run into the kind of "sticker shock" issues raised above, but would hiring someone be better? To be successful, an O.R. person starting work in an organization with no previous history in O.R. would need a bit of experience, would need to understand business and managers, and would likely have very little opportunity to do any "real" O.R. for some time. How do you find such a person? There are very few of them.

I have suggested to students in other Ivey programs that there appears to be an opportunity to set up O.R. consulting shops that provide general, but not overly technical, O.R. assistance to multiple clients. A number of groups of graduates have taken up this challenge and have started companies based on this business model. I have sent EMBA clients in their direction, sometimes successfully. These firms, however, quickly change direction.
It turns out that $20,000-$50,000 projects are relatively easy to land, but it takes a lot of effort to get enough of them to prosper and grow. It is much easier to latch on to one client and keep working with that client until the group becomes an application area specialist and then to market this specialty to other similar clients. As a consequence, these general practice O.R. consultancies are short-lived. In conclusion, I would like to offer two lessons I have learned from this EMBA teaching experience. The body of O.R. work contains some concepts that most of the world does not know about or understand. We can educate intelligent people relatively easily so that, armed with these concepts and, perhaps, a spreadsheet, these amateur O.R. practitioners can initiate profitable changes in their work environment. One might think that success at the amateur level would prompt firms to invest in professional O.R. work, but firms trying to make this jump run into difficulties, and even minor hurdles at this stage will likely prove fatal to any future O.R. work. One path toward promoting the profession (following the objectives of the INFORMS "Science of Better" campaign) is to try to make it easier for firms looking for O.R. help to find suitable people. Again, the INFORMS "Finding an O.R. Professional" directories (http://www.scienceofbetter.org/find/index.htm) provide much useful material, but this is a very complex marketplace, and it seems as if a good deal of human input is required to help management through this step. Amateur O.R. can be an opportunity for the profession, but there appears to be a disconnect between a successful amateur O.R. project and the decision by the organization to invest in an O.R. capability. The O.R. profession should think more about how we can make it easier for organizations that have become aware of the value of O.R. to make the transition to becoming O.R. organizations.

Peter C. Bell (pbell@ivey.uwo.ca) is a professor at the Ivey School of Business, University of Western Ontario. Bell was the 2005 winner of the INFORMS Prize for the Teaching of OR/MS Practice.

News and stories


Business Plan Competition 2007: The 'Eight Great' Make Their Pitch

Business Plan Competition 2007: The 'Eight Great' Make Their Pitch
Published by Holly Brekken on 20/05/2007

In a perfect world, there would be faster computers, less lower back pain, more accurate ways to detect the warning signs of a heart attack and even better-fitting business attire for female executives. And that would mean more comfort and more time for enjoying the sweet things in life, like a gourmet chocolate bar. If the finalists in the 2006-2007 Wharton Business Plan Competition -- the "Eight Great," as they are collectively known -- are able to achieve their entrepreneurial schemes, the world would indeed become such a place. The finalists, who recently competed for more than $70,000 in prize money as well as the prestige of a competition gaining national attention, offered an array of proposed business ventures. The diversity of the business plans at this year's Venture Finals solidified the trend away from the Internet-oriented ventures that dominated the first two years of the competition, founded in 1998, when the word "entrepreneurial" seemed synonymous with "dot-com." This year, only two of the eight finalists were involved in any type of computer technology -- a high-speed semiconductor maker and an engineering software start-up. But even if the dot-com bubble is beginning to seem like ancient history, interest in entrepreneurship and in the Wharton Business Plan Competition, sponsored by Wharton Entrepreneurial Programs, seems higher than ever. (This year's winners, for example, were highlighted on CNN.com.) Indeed, the competitors represented the best of some 356 students and partners who entered the annual event earlier in the academic year.
The eight finalists were chosen from a field of 151 teams; any team that included a Penn/Wharton student could participate. At the April 24 finals, the teams were given 20 minutes to present the highlights of their plan and to answer questions from the panel of judges. This year's judges included representatives of Johnson & Johnson, Adify, Schering-Plough, Norwest Venture Partners and Magic Sliders. The event brought together more than 200 entrepreneurs, venture capitalists, investment bankers, alumni, faculty and students. Finalists received a total of $70,000 in combined prizes, including $20,000 in cash for the grand prize winner -- who is also eligible for $10,000 in combined legal services. In addition, for the first time, the winner automatically gets to compete in the Draper Fisher Jurvetson Venture Challenge in New York City, with a shot at more than $250,000 in start-up money. This year, one entrant also made it to the semi-final round at MIT's business plan competition, while another was named an inaugural winner of the Wharton Venture Award (WVA), receiving $10,000 to encourage the venture's development over the summer. So who are the 2007 winners? In keeping with custom, we will first list the entrants in alphabetical order -- giving readers an opportunity to guess the outcome -- before we announce the results. Angiologix: This medical diagnostics venture says that it can offer something that many physicians and their patients would consider a significant advance: the ability to predict the likelihood of a heart attack or another major cardiovascular event before it occurs. Such a tool would be significant, since as many as 200,000 heart attack victims every year either have no warning signs at all or are misdiagnosed, according to Maria Merchant, the team leader. The medical advance of Angiologix is a test based on human coronary endothelial cells, which line the blood vessels and have been found to be the most important indicator of heart
attack risk. "In the future, we believe our test will be used in all patients after a certain age," said Merchant, who saw opportunities to partner with stent manufacturers and pharmaceutical firms. CircuMed Biopharmaceuticals: This bio-tech startup describes itself as offering a novel approach to treat thrombotic diseases, or blood clots -- which can cause heart attacks or strokes -- based upon a series of proprietary drug-delivery platforms. Increasingly, doctors use Tissue Plasminogen Activators, or TPA, to dissolve clots, but research has shown that these have high toxicity levels and can also dissolve what team leader Armen Karamanian described as "good clots," leading to dangerous and unnecessary bleeding. The delivery system pioneered by CircuMed, which relies on delivery of red blood cells, is more highly targeted, and as result can be less toxic and delivered over a longer period of time. The new venture is in the process of obtaining an exclusive license for this technology. Energetica: With gasoline prices soaring over $3 a gallon yet again, Energetica hopes to take advantage of an underused source of energy right here in America -- large amounts of biogas that can be extracted from landfills and wastewater treatment plants. Team leader Mat Peyron said that deriving energy from the methane contained in the biogas from these waste facilities is an undertapped, $600 million market, since currently only about one in five U.S. landfills are using this energy source. Energetica believes that its start-up has an advantage in the technology that was developed by team member Paul Tower, which has been used successfully for five years and which removes more contaminants from biogas than competing technologies. 
Foodilly Chocolate Factory: From Starbucks Coffee to Build-a-Bear stuffed animals to the Coldstone Creamery ice cream chain, there seems to be no limit to American consumers' willingness to spend large amounts of money and time on unique, customized retail experiences. This team of Wharton undergraduates, led by Michael Tolkin, believes it has found an unfilled need for upscale candy bars that customers design themselves in an interactive factory-style setting. Tolkin has said he was inspired by the movie Willy Wonka & the Chocolate Factory, and he and his partners were happy to pass around baskets of samples. The team also presented research showing that an array of customers -- from luxury seekers to young trend-setters to "indulgers" with little concern about calories -- would partake in the Foodilly experience. Nantronics: What could be more promising in today's marketplace than flash-memory integrated circuits that can operate at lower power and with much higher density than comparable existing semi-conductors? How about targeting these semiconductors to the fast-growing consumer market of China, which is the plan of Nantronics, a joint venture that has teamed Silicon Valley electrical engineer Frank Shi with Wharton MBA candidate Katerina Chi. This start-up already has an office in Shanghai and an agreement with a leading semi-conductor foundry, SMIC. Shi told the judges that "flash memory is the fastest-growing sector of the global integrated circuit market," and China is the hottest area within that sector. NP Solutions: One of the largest potential markets in the world of biotechnology is treating lower back pain, since many people begin to suffer some form of degenerative disc disease in their late 20s, and the vast majority of sufferers are not disabled severely enough for highly invasive and risky treatments, such as spinal fusion.
But the spokesperson for the NP Solutions Team -- Neil Malhotra, a neurology resident at the University of Pennsylvania Medical School -- developed a much less invasive form of treatment, which involves a tiny injection of hydrogel into the affected disc. "Lower back pain is responsible for 15 million physicians' visits a year," Malhotra told the judges, in sizing up the large potential market for their innovation, which is slated to begin testing in animals. Tamara Kane: First-year Wharton MBA student Tamara Rajah believes she has discovered a large, underserved market for business attire among female executives in the United Kingdom. The biggest problem, according to Rajah, who was the only solo presenter in the Venture Finals, is one of tailoring and fit, since most executive clothing is designed for women with an hour-glass figure, even though there are at least six distinct female shapes. But that's not the only advantage her start-up, Tamara Kane, offers. Rajah said the company will use low-cost tailoring from Thailand or Malaysia, and that its business model is heavily weighted toward Internet shopping using customer measurements logged into its system. Vektor: Team leader Raymond Aranoff worked for a number of years at NASA's Johnson Space Center in Houston, Tex., where he was frustrated by the difficulties in integrating different types of engineering software and data management systems. "General Motors loses $1 billion because of this problem," says Aranoff, who notes that software designers have tended to ignore the poor integration in industries such as automotive, defense contracting and aerospace. Vektor's business model is built around a web-based software system that collects and integrates data from multiple software platforms into one intuitive and customizable database. Some of the advantages of Vektor include the ability to use the platform across multiple locations and with outside clients. And The Winner Is...
NP Solutions, which was awarded first place, followed by Nantronics in second place and Energetica in third place. Foodilly Chocolate received the Frederick H. Gloeckner Award in Entrepreneurial Studies, given to the highest-ranking team comprised of at least 50% undergraduates. The boost for NP Solutions gives the team members hope that they can speed up the commercialization process for their lower back treatment, which still must undergo a lengthy approval process. "We're all very excited: We're going to try to get to the next point as quickly as possible," says Malhotra. The award illustrates the academic and intellectual diversity of the Penn community. Malhotra developed the idea for RejuvaDisc™ while treating patients during his medical residency, but after winning an initial grant from the Neurosurgery Research and Education Fund, he looked around for initial support. The winning NP Solutions Team includes Patrick Mayes, Jason Covy, Peter Buckley and Brian Bingham, who are PhD candidates in pharmacology at Penn's School of Medicine, and team leader Serena Kohli, a second-year Wharton MBA student. Meanwhile,
NP Solutions is continuing a recent hot streak in the Venture Finals for biotech, since last year all three of the top prizewinners involved medical solutions. Given the promise of NP Solutions, the team is hopeful that most of them can stay together, even though the pharmacology students still need to complete their degree. The team will get a chance to present their ideas all over again in New York, this time with $250,000 at stake. And, if history repeats itself, the start-up has a good chance of making it: Five of the six most recent Wharton business plan winners are still in business. The group will be dividing the money evenly, although Malhotra has already said he will re-invest his share in the venture. As for the rest of them, according to team member Mayes, "I think all of us have the idea that we'd like to stay with it in some fashion." http://knowledge.wharton.upenn.edu/article.cfm?articleid=1738#

News and stories


How Operations Research Drives Success at P&G

How Operations Research Drives Success at P&G


Published by Ioannis Anagnostakis on 01/03/2008

A story about how Operations Research and data-driven optimisation drive success at Procter & Gamble

How Operations Research Drives Success at P&G


by Andrew Hines You can't just call it a company anymore - it's more of an economy unto itself. With $76 billion in annual sales, 138,000 employees, and operations in more than 80 countries, Procter & Gamble, the world's biggest consumer goods company, has grown to such epic proportions that economists consider it a bellwether of consumer spending and confidence. Among the more than 300 brands it sells globally, from Gillette and Crest to Scope and Swiffer, 22 generate more than $1 billion in annual revenue. Another 18 pull in at least $500 million. Yet there's an entirely different element of P&G's success that doesn't show up on the balance sheet, and which figures into almost every key decision driving sales and profits - from choosing the right brand names to slap on new products to precise juggling of global inventories. The secret ingredient? Data - some 900 terabytes of total capacity, 50 TB more than Google searches every day - that P&G uses to measure and optimize almost everything it does. Three decades ago, P&G's cadre of data analysts was programming simplistic queries into mainframe computers to determine, for instance, the best time of day to deploy television advertising. It mostly trusted executives' instincts when deciding when to launch
a new product or how much inventory to put on store shelves. These days, thanks to exponentially more powerful computers, data retrieval and storage, and new generations of software, it's a central army of "quants" at P&G who are arguably as important to its overall success as those storied P&G brand managers. The company has raced to the forefront of data innovation in recent years, and has turned analytics - or operations research (OR), as it's more widely known - into a competitive edge that few others fully understand. As Brenda Dietrich, an IBM fellow at IBM's Watson Research Center, explains, "There's a gap between the math professionals and the nonmath executives in many companies. The companies who have people who can walk into a business meeting and tell executives how to use OR tools are the ones who've got the edge. Deployment is no longer done just by the math people; analytics has become much more usable by a broader set of people within an organization." At P&G, it's top quants like Glenn Wegryn, associate director of product supply analytics, who have quietly led the data revolution. Wegryn's team of 20 analytics pros combines enterprise-scale simulation and risk assessment software with in-house tool sets to help streamline supply chains, launch new brands, generate internal workflow models and tackle a host of other operational and organizational problems. According to Wegryn, P&G doesn't make any significant analyses on supply-chain structure without input from his team, since data crunching that can improve the slightest of margins in a company of P&G's size can generate huge dividends. "The consumer products industry is cost driven, and a lot of it is commodity type in nature," Wegryn explains. "So very efficient and effective supply chains are critical for success and the ultimate profitability of the company. OR techniques, when utilized effectively, save costs, reduce cash investments and inventory, and can even improve top-line growth." 
P&G, GE, Merrill Lynch, UPS - the list of Fortune 500 companies getting into the OR game is expanding, says Mark Doherty, executive director of the Hanover, MD-based Institute for Operations Research and Management Sciences (INFORMS), an OR think tank. "In the private sector, OR is the secret weapon that helps companies tackle complex problems in manufacturing, supply chain management, health care, and transportation," he says. "In government, OR helps the military create and evaluate strategies. It also helps the Department of Homeland Security develop models of terrorist threats. That's why OR is increasingly referred to as 'the science of better.'"

Rise of the Quants


The current analytics strategy at P&G took root in 1992, when Wegryn and a team of analytic professionals took on a daunting challenge: The company had too many manufacturing plants scattered around the country, and needed to eliminate redundant capacity, figure out optimal inventory holding policies, and develop other techniques that could optimize a supply chain that spanned continents. The data formulas Wegryn began churning through weighed myriad factors, including the impact of NAFTA on operations, trucking deregulation, and redundant capacity issues. The team, which included 30 managers and upwards of 1,000 employees around the country, spent a little less than a year devising tools that generated various consolidation scenarios. The team's recommendations ultimately allowed P&G to shut down multiple plants and have since generated more than $1 billion in cost savings. Small wonder, then, that mathematicians are in on business decision making in many companies, not just P&G. Entire companies today - Google, for one - are being built almost entirely on mathematical modeling. "We all know the slogan 'Intel Inside,'" says Vijay Mehrotra, professor of decision science at San Francisco State University. "But we don't automatically think, 'Is there OR inside?' And yet there is, in a staggering number of things. When you book a car with Hertz, and instead of saying, 'It's unavailable,' they say, 'It's available for $59, not $39' - that's OR inside. Today it's embedded in the way we do business."
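The Hertz example Mehrotra gives (quoting $59 instead of $39 when cars are scarce) is classic revenue management. One standard tool is Littlewood's rule: protect capacity for the high fare until the chance of selling a protected unit at the high fare drops below the fare ratio. A sketch, using the two fares from the quote but an invented Poisson model for high-fare demand:

```python
import math

def poisson_tail(k, lam):
    """P(D > k) for D ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i)
              for i in range(k + 1))
    return 1.0 - cdf

def protection_level(f_high, f_low, mean_high_demand):
    """Littlewood's rule: smallest protection level y such that
    P(high-fare demand > y) <= f_low / f_high."""
    ratio = f_low / f_high
    y = 0
    while poisson_tail(y, mean_high_demand) > ratio:
        y += 1
    return y

# Fares from the Hertz quote; Poisson(10) high-fare demand is invented.
print(protection_level(59.0, 39.0, 10.0))  # protect 9 cars for the $59 fare
```

With 9 cars protected, a $39 request is refused whenever 9 or fewer cars remain, which is exactly the "available for $59, not $39" behavior in the quote.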

P&G's Killer Apps in OR


Streamlining manufacturing plants was just the start. Here are a handful of other killer apps in OR that Wegryn has since developed and refined at P&G: New product branding: Several years ago, Wegryn used decision analysis techniques to help managers decide to use Crest as the brand name on Crest White Strips. Granted, that might seem like a no-brainer, but it was a complex decision because the teeth-whitening category was new - a situation in which a new, stand-alone brand name would perhaps make sense. P&G turned to their analytics team to sort matters out, and, as a result, Crest was chosen as the brand. Wegryn says the process involved "getting clear on the question, evaluating options, understanding the uncertainties, and analyzing the best decision that you have available. In the end it was decided to use the brand equity Crest had." Sourcing materials: Every product at P&G requires myriad materials, obtained from hundreds of different sources worldwide. Using OR techniques, Wegryn's team analyzes which source is optimal for every product. "A lot of times, there's service and quality considerations," Wegryn says. "We also measure whether a manufacturer really has the capability to deliver the materials at the quoted price." For instance, retail clients of P&G spend $140 million per year on in-store displays for P&G brands in the United States alone, often buying the display from one vendor. By using OR to determine the best source via a Web interface, P&G now pockets nearly $67 million annually in cost savings and has slashed the order-and-delivery cycle for store displays from 20 weeks to just eight. International trade and finance: P&G has ground operations in 86 countries, posing huge logistical and financial challenges. With products constantly crossing national borders, P&G is exposed to considerable exchange rate risk, where margins can be
squeezed by the tiniest movements in currency. Wegryn's group taps into software that helps predict optimal exchange rates and allows plant managers to shift production accordingly. "Let's say there's one plant in the US and one in Europe," Wegryn explains. "Based on the exchange rate, we will adjust where we're manufacturing and sourcing product from. It's not a massive adjustment, but just a slight adjustment to minimize exchange exposure and maximize the profit, ultimately, for the business." Inventory management: At giant-sized P&G, inventory management is crucial to overall efficiency. "How much inventory do I need, and where do I need to have it," explains Wegryn, "are really simple questions that are really hard to answer." Using OR, the company now fine-tunes inventory dynamics. For example, conventional wisdom once held that adding a new warehouse to a supply chain would always add inventory into the system as well, ratcheting up costs. But using analytic methods, Wegryn's team poked holes in this assumption, showing that new inventory need not be added. Their work not only justified a new warehouse economically; by using better methods, the team was also able to track and put exactly the right amount of inventory in the system, reducing overhead costs. "The huge deal about this application of OR is that we've been doing it for 15 years," he says. "It's used in every area of P&G." Organizational design: Wegryn hasn't aimed OR's powerful lens at strategic problems alone, but at internal management challenges as well. Over the past few years, Wegryn has developed simulation models that help execs in each of the company's five major business organizations keep tabs on their organizational structure and inflow of talent.
Taking into account variables such as hiring rates, attrition, retirement, movement between jobs, promotion rates, and so on, the quants created a "flow model" that shows managers what the likely flow of people moving in, out, and within an organization will be over the course of months or years, helping them to determine where they should be hiring most and when.
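A flow model of this sort reduces to simple recurrences over headcount by level. The sketch below uses invented transition rates and headcounts (the article does not disclose P&G's figures):

```python
# Hypothetical workforce "flow model": annual transitions between job
# levels. All rates and starting headcounts are invented for illustration.

RETAIN_JUNIOR = 0.70   # fraction of juniors still junior a year later
PROMOTE = 0.20         # fraction of juniors promoted to senior
RETAIN_SENIOR = 0.85   # fraction of seniors retained (rest leave or retire)

def step(juniors, seniors, hires=50):
    """Advance the headcount flow by one year."""
    next_juniors = RETAIN_JUNIOR * juniors + hires
    next_seniors = PROMOTE * juniors + RETAIN_SENIOR * seniors
    return next_juniors, next_seniors

def project(juniors, seniors, years, hires=50):
    """Headcount trajectory over the given number of annual steps."""
    history = [(juniors, seniors)]
    for _ in range(years):
        juniors, seniors = step(juniors, seniors, hires)
        history.append((juniors, seniors))
    return history

for year, (j, s) in enumerate(project(200.0, 100.0, 5)):
    print(year, round(j), round(s))
```

Reading the trajectory tells a manager whether the senior ranks are thinning or swelling under the current hiring rate, which is the "where and when to hire" question such a model is meant to answer.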

Toward "OR Inside"


One of the first myths about OR is that it applies only to operational issues. By every measure at P&G, however, OR is a cross-functional discipline applied to anything from executive compensation to inventory management. Wegryn says his analytics group looks at every business problem and asks: "Is it a strategic problem, is it a structural problem, or is it an operational problem? We are called into various problems throughout the entire spectrum." P&G's OR tools fall into four broad categories, according to Wegryn: structured analytical modeling using a spreadsheet-type technology; decision-making analysis methods; mathematical modeling in the form of optimization; and simulation technology. Within those categories, Wegryn has subsets. One he calls "OR inside" - packaged tool sets from an enterprise vendor. P&G uses outside vendors for optimization software, simulation software, object-based simulation modeling tools, and risk assessment software for decision analysis. Wegryn believes that embedded analytics in commercially available packages are a baseline for any big company to stay in the game. "Our competitors utilize OR tools that are embedded in solutions, as we do, and that is simply to stay competitive." But "canned" OR doesn't come prepackaged with what Wegryn calls "company intelligence" - data that's specific to the nature of a particular company's problems and challenges. That's where "applied OR" comes in. Applied OR is project-specific, utilizing customized tools developed by the company's analytic team that target particular problems. "We have done analyses throughout the world on very specific questions," Wegryn says, "like what is the proper balance between capacity at a particular plant and the inventory it should be holding, to help responsiveness to our customers."
Whatever the application, his team collaborates closely with members of P&G's IT team - the company's Global Business Services unit has several hundred employees operating in analytics alone - in order to get the answers to many of P&G's problems. Says Wegryn: "We develop the algorithms and mathematics inside, but as far as database and systems architecture and deployment and support, we defer to our IT colleagues." In the 23 years since she joined Big Blue, IBM's Dietrich has seen OR evolve "from data gathering that took months and months" to OR available on the desktop. "We can now deliver to executives software that, with a click of a button, can run models and present results. But it takes work to get OR embedded in daily business, and it takes people who can present OR to executives in a concrete way. The bottleneck in OR today is people - the industry is short on people who can deploy OR and frame it in a business context." And that is what gives Wegryn his unique status at P&G: He can talk business, and his business counterparts listen. "Rarely are we walking in front of a senior manager without any in-business support for the work we've been doing," he says. "We go off, we analyze options, we come up with a recommended plan, and then we present that to management. Do they throw us out of the office for screwball ideas? The short answer is no. We've developed a reputation of having an unbiased view of how the business operates, and we've earned their trust."

Posted by Ioannis Anagnostakis On 01/03/2008

News and stories


The Future of Operations Research


The Future of Operations Research


Published by Ioannis Anagnostakis on 01/03/2008

Lee W. Schruben is a professor and former chair of Industrial Engineering and Operations Research at UC Berkeley, and one of the world's leading authorities on simulation theory and practice. He has consulted widely in the high-tech and biotech industries, as well as in banking and in auto making. He spoke to BNET about the challenges ahead for OR and for those in the OR field.

The Future of Operations Research


by Andrew Hines BNET: What are the big unsolved problems for OR in practice today? Schruben: We have to realize that what we're actually doing is forecasting. We are trying to model what will happen in the future, and that's the biggest practical challenge - how to get away from models with static assumptions and develop predictive models that can respond in real time to changes in the world. BNET: What do you mean by static assumptions? Schruben: As it is now, we collect data and build a model based on assumptions we think are reasonable at the time. But our results only tell us what would have happened in the past, when those assumptions were valid. Most models assume that input data are independent and identically distributed (IID), but that's almost never true. Assuming IID data is assuming that events in the world don't depend on each other, and that the probability of them happening doesn't change over time. But things are changing constantly in business. BNET: So what is the right approach to OR modeling? Schruben: We have to integrate forecasting and risk analysis with OR modeling. We have to integrate models with dynamic market information and forecasting. Simulation is the workhorse to do this, because it can handle that kind of dynamic complexity, whereas most OR models tend to be optimization, static kind of models. BNET: Do you think it's accurate to say that OR is more of a theoretical exercise than a practical solution to business problems? Or is the practical application of OR techniques more of a defining factor now? Schruben: There's a lot of theoretical OR that has given the field a bad name. This comes from the "managerial insight" section of OR research papers. Most of the insights are either obvious or wrong. And these insights are often couched in such obscure terms that they confuse and disillusion managers. So the theoretical stuff tends to give the field a bad name. But the practical application of OR is the reason we're still in business. 
There's no question that OR in practice has made a huge impact on business.

BNET: How much do you think packaged business analysis programs, like SAP or Oracle's ERP solutions, help or hinder the advancement of OR in business practice?

Schruben: In order to compete, software companies have to say "all problems are solved by our software," which just isn't true. In that sense, packaged or embedded solutions are probably hindering OR in practice. A lot of out-of-the-box OR techniques are 20 years old. Innovation is largely coming from academic researchers, but unfortunately, a lot of these software companies don't welcome academic input. In the ideal world, there would be a lot more collaboration.

BNET: How do you see the role of OR in business changing over the next 10 years?

Schruben: I'm hoping that managers become much more knowledgeable about analytics and OR. I see the education of new MBAs focusing much more on business analysis. MBAs need to be able to ask the right questions and develop a systematic way of thinking about problems. Learning particular analytic techniques alone won't get you very far, but the training will teach you how to discipline your thinking, how to ask the right questions and become a wise software consumer. Software vendors need to say, "Wow, we can't keep up with the MBAs."
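Schruben's point about IID inputs can be made concrete with a small simulation. The sketch below is our own illustration (made-up numbers, not from the interview): it generates demand from an AR(1) process, a simple model in which today's value carries over into tomorrow's, and measures how strongly consecutive observations depend on each other. Data like this visibly violates the independence assumption baked into many OR models.

```python
import random
import statistics

# Hypothetical illustration: real-world inputs are rarely IID. An AR(1)
# demand process d_t = 0.8*d_{t-1} + noise is strongly serially dependent,
# so a model that treats the observations as independent misreads the risk.
random.seed(42)

def ar1_series(n: int, phi: float = 0.8, sigma: float = 1.0) -> list[float]:
    d, out = 0.0, []
    for _ in range(n):
        d = phi * d + random.gauss(0.0, sigma)  # today depends on yesterday
        out.append(d)
    return out

def lag1_autocorr(xs: list[float]) -> float:
    """Correlation between consecutive observations; 0 for IID data."""
    m = statistics.fmean(xs)
    num = sum((a - m) * (b - m) for a, b in zip(xs, xs[1:]))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

demand = ar1_series(5000)
print(f"lag-1 autocorrelation: {lag1_autocorr(demand):.2f}")  # near 0.8, not 0
```

For truly IID data the printed autocorrelation would hover near zero; here it sits near 0.8, which is exactly the kind of structure a static model silently ignores.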

Posted by Ioannis Anagnostakis On 01/03/2008

News and stories


10 ways to spark your creativity and jumpstart your business and sales

Published by Jeffrey Tobe CSP on 18/05/2008

If you are successful in business today, you must already be somewhat creative. For many, creativity is an underutilized asset. Think about it: If you weren't creative, would you be able to generate ideas for clients? Put together a great proposal? Keep customers happy and get new business? Probably not. But ponder this as well: If you were even more creative, would your sales increase? Almost certainly. Just what is "creativity"? There's no simple definition of creativity and no guaranteed way to maintain it at consistent, optimum levels. Researchers, mostly psychologists, claim that being creative means being "novel" and "appropriate" at the same time. I have been conducting creativity-in-business workshops for years and I am still not sure that I have the definition. I have found that the most creative entrepreneurs have three things in common:

Their willingness to question the norm.
Their ability to see their businesses from their clients' perspectives.
Their adeptness at taking ideas from other professions and adapting them to theirs.

Not that difficult, right? Wrong! Creativity is a constant state of awareness, and many are not willing, or are too involved in the everyday stuff, to devote the effort required to stay on top of the creative process. How 'bout a jumpstart? To help give you a jumpstart, here are 10 ways to spark your creativity and reinvigorate your business pizzazz:

1. Clear your mind. You don't have to chant a mantra in the lotus position to sweep your mind clean enough to let in new ideas. Simply take a physical break from your everyday hectic pace. Your body will take a break, but your gray matter won't. It's akin to downloading a program to your computer and walking away while it processes the information. "Subconsciously, you're working on your problem or goal - particularly if you've reinforced it," says David Heavin, senior marketing consultant and owner of Ideas In Action.

2. Stick to your strengths. Here's something I consciously decided: I gave up on being organized. I am just not good at that stuff. I work with people who are great at it and give me the freedom to do what I do best. I know how my mind works best and I know my strengths.

3. Look for a jumpstart. Most of the time, businesspeople can't wait for inspiration to strike. You need ideas now and you are hopelessly stuck. Try opening a dictionary and randomly selecting a word. Then try to formulate a solution incorporating the word. The concept is based on a little-known truth - barriers are actually opportunities to get you thinking.

4. Don't brainstorm, brainspark instead. I think that brainstorming implies a violent act that occurs when the wind blows in just the right direction. Brainsparking is the real goal - sparking an idea in yourself or others that will take you to the next level or solve a seemingly unsolvable challenge. The best brainsparking sessions involve bringing together a team of people with different personalities and thinking processes. They may not be from within your organization. Invite some business associates whose opinions you trust to lunch and go for it! Incidentally, it isn't always going to be a likable person who inspires you. Creative marketer Rick Segal mentions a woman who worked for him for 17 years. "I hated her," he says. "I didn't like working with her. But she was, without a doubt, one of my best employees who pushed me more than any of the ones I liked. She's one of the few people I actually remain in contact with."

5. Tear down barriers. Some obstacles to creativity include behaviors (yours and others') and physical reality. Do you work in a dreary space? Maybe your demanding schedule is so blocked you can't devote even 15 minutes to creative thinking. Fear and lack of confidence will kill the creative spirit, and self-criticism will shovel the dirt on top of your fragile new ideas. Why not set a new rule? Self-censorship is prohibited! Never allow one of your ideas to wither away because you think it's too crazy to say out loud. Just remember that every idea is a good idea and nobody should play the critic too quickly. Usually it is the weirdest ideas - when modified - that become the great ideas in the end.

6. Create a space to be creative. Set up an environment that encourages creative output, a comfortable space within which you feel non-threatened and able to create. Many of us work in small offices or have a space at home to work. There's likely an unused shelf or corner that you can call your "creative space." Put two things there: 1) something that reminds you of when you were a child (as a reminder of when you were uninhibited, willing to take risks and unbelievably creative), and 2) something that symbolizes what you do for a living. Try to find something that brings a smile to your face. Make some rules for when you go into your creative space. One client shared that they have turned a small office into a "fun room" with inflatable chairs, a popcorn machine, black light, children's bookshelves and an easel with paper. The rules are no shoes, no ordinary pens and pencils, and no interruptions during creative sessions.

7. Question everything. Nothing can block creativity like "the status quo." If you're willing to question things, you will find out not necessarily what's been working for other people, but what will work for you. Those who ask the most questions - children - are in touch with their creative side and use their imagination freely. Mihaly Csikszentmihalyi, author and professor at the University of Chicago, says the most creative people live by the maxim, "Die young as late as possible." Creative people are "as curious, engaged and innocent as children. They keep asking questions, wrestling with interesting problems, and looking at the world through an ever-changing lens," he says.

8. Mine the past for ideas. Don't feel like you always have to blaze your own trail. What was done in the past should be used as inspiration. When you hear what has worked for others, don't discount it. Take the idea and ask yourself, "How can I apply that to what I am doing in my organization?"

9. Get outside your comfort zone. Leaving your comfort zone doesn't mean abandoning what's proven to work. It means adopting a new perspective, seeing a new place or looking to another industry for ideas. As you read this, try something. Take off your watch and put it on the other arm. Uncomfortable? Awkward? Yes, but if you wear it like this for the rest of the day, you will be reminded of all of those things that we do in our lives (personal and professional) that are strictly habitual - we do them because that's the way they have always been done in the past. I suggest to clients that they look outside of their "four walls" for ideas. Look at other professions and industries. See what they are doing to market themselves and attract and retain customers.

10. Know your customer. If you are truly offering your products or services in a creative way to your customers and they see you as a creative resource, you have to be asking a lot of questions. Creativity builds from interaction with your customers. How well do you know them? How well do you anticipate their needs? Can you sell them something unexpected if you believe that's what they need?


Your success is ultimately contingent upon your customers' successes. If they succeed, you succeed - and you don't ever have to worry about the competition.

Posted by Seena Koshy On 18/05/2008

News and stories


Guide to Creativity and Innovation for Small Business



Published by Donna Fenn on 18/05/2008

You have to be extraordinarily agile to keep up with today's constantly changing competitive environment. The best way to ensure lasting success is to make sure that everyone in your company is encouraged and rewarded for thinking and behaving creatively. Create a culture of creativity and innovation and you will:

1. Continue to deliver products and/or services that are valuable to your customers
2. Establish yourself as a market leader
3. Build a more vibrant and engaged workforce
4. Become an expert at identifying and acting upon great ideas

Action Steps
The best contacts and resources to help you get it done:

Encourage brainstorming and idea generation
Innovation is essentially the introduction of something new - the concrete result of creative ideas. With the right tools, everyone can be taught to think creatively.

Listen to your customers
Your customers are often your best source of new ideas. Ask them how they'd like your product or services improved, but also master the art of observing them. Look for trouble spots; they're your opportunity for innovation.

Develop systematic methods for evaluating new ideas
Consider all ideas, even the ones that seem a little crazy. But make sure you evaluate them objectively before investing time and money bringing them to market.

Protect new inventions
Great ideas breed copycats. First, do a thorough patent search to make sure you're not infringing on anyone else's territory, and then register your own patent or copyright.

Reward success (and failure!)
Employees need to feel invested in innovation, so reward them for suggesting great ideas that fly and for 'failing fast' with ones that don't quite get off the ground.

Tips
Helpful advice for making the most of this Guide

Customers can't always articulate their needs, so instead of asking them about your product or service, watch them using it and look for trouble spots.
Educate employees about the rules of engagement for brainstorming (i.e. no negative comments!).
If you've invented a new product, seek legal advice on patents and trademarks immediately.
Great innovation breeds copycats, so stay one step ahead of the competition by innovating continuously, not just once.
Prototype new ideas as quickly as possible to give them form and substance.

Posted by Seena Koshy On 18/05/2008

News and stories



Gartner Networking Maturity Model



Published by Hassnain Chagani on 26/08/2008

This diagram illustrates how Gartner describes the five phases of networking maturity that an enterprise undergoes.

Phase 1: Chaotic
The first phase, "Chaotic", is described as:

Ad hoc
Undocumented
Unpredictable
Multiple help desks
Minimal IT operations
User call notification

Phase 2: Reactive
The second, "Reactive" phase is demonstrated by:

Best effort
Fight fires
Inventory
Document problem management process
Alert and event management
Monitor availability (uptime/downtime)

At Phase 2, organizations are first able to maintain configuration.

Phase 3: Proactive
The third phase, "Proactive", is described as:


Monitor performance
Analyze trends
Set thresholds
Predict problems
Automation
Mature asset and change management process

At Phase 3, organizations are able to perform service and account management.

Phase 4: Managed
The fourth phase, "Managed", is demonstrated by:

Define services
Understand costs
Set quality goals
Guarantee SLAs
Monitor and report on services
Capacity planning

At Phase 4, organizations are able to perform service process engineering, and later in this phase, business management.

Phase 5: Optimal
Finally, the fifth, "Optimal" phase is characterized by:

IT and business metric linkage
Class of service choice with pricing
Policy-based configuration
Self-provisioning tools
IT performance business process
Business planning

At Phase 5, organizations are able to perform "Profit" management.
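The five phases above can be sketched as a simple lookup table. The snippet below is our own hypothetical encoding, not a Gartner artifact: it maps a few signature practices per phase and returns the highest phase whose practices an organization has fully adopted.

```python
# Hypothetical sketch: the five networking-maturity phases as a lookup from
# observed practices to a maturity level. The "signature" sets are our own
# selection from the phase descriptions, not an official assessment tool.
PHASES = [
    (1, "Chaotic",   {"ad hoc", "undocumented", "multiple help desks"}),
    (2, "Reactive",  {"fight fires", "alert and event management"}),
    (3, "Proactive", {"monitor performance", "analyze trends", "automation"}),
    (4, "Managed",   {"define services", "guarantee SLAs", "capacity planning"}),
    (5, "Optimal",   {"policy-based configuration", "self-provisioning tools"}),
]

def maturity_level(practices: set[str]) -> int:
    """Return the highest phase whose signature practices are all observed."""
    level = 0
    for number, _name, signature in PHASES:
        if signature <= practices:  # all of this phase's practices present
            level = number
    return level

print(maturity_level({"fight fires", "alert and event management", "inventory"}))  # 2
```

A real assessment would weigh many more criteria per phase, but the lookup shape makes the progression explicit.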

Posted by Hassnain Chagani On 26/08/2008

News and stories


Standardize Your Process to Improve the Bottom Line



Published by Usha Varadarajan on 01/04/2009

Standardize your processes! You can save time and money and prevent errors. Things you do over and over should be done the same way every time - provided, of course, that you do the task the best way. Standardization helps save time: as things become routine, a process is easier to do and is done more quickly.

Full article: http://www.businessperform.com/articles/standardize_your_process.html

Posted by Usha Varadarajan On 01/04/2009

News and stories



Learning to Run the Lean Marathon



Published by Usha Varadarajan on 01/04/2009

Fewer than 20% of companies implementing any form of Lean-related improvement programme manage to achieve worthwhile results. Effectively, 80% or more of companies fail to complete the Lean Marathon. The key to success is to start small and build up the improvements, rather than go for the kill, and to realise that it is better to implement something now that is 75% successful than to keep planning for 100% success and then fail to achieve anything.

Full article: http://www.businessperform.com/articles/lean_marathon.html


Knowledge Base
Problem Solving in Airline Operations (ORMS Today, April 2005, Volume 32, Number 2)

Published by Erik Andersson, Anders Forsman and Stefan E. Karisch on 30/01/2007

Marrying end-user modeling and large-scale optimization pays off at SAS and other carriers.

By Erik Andersson, Anders Forsman, Stefan E. Karisch, Niklas Kohl and Allan Sørensen (http://www.lionhrtpub.com/orms/orms-4-05/frairline.html)

The airline industry is experiencing very challenging times, and many airlines need to undertake substantial changes to their business processes to get back to profitability. Because an airline's operations are generally considered a cost driver, carriers emphasize cost-effectiveness to make improvements to the bottom line. This push towards cost savings is supported by the use of optimization systems that improve the utilization of scarce and expensive resources such as aircraft, crew, gates, etc. To maximize the benefits of resource optimization, one needs to identify, model and solve the right operational problems. Once this is done, it is crucial to maintain flexibility and adjust to changes in the business environment. The challenge of problem solving is to find the best possible solution, to the right problem, as fast as possible. These three dimensions are all essential to the success of optimization systems. Depending on the environment, some dimensions are emphasized more than others, and some might even be neglected. Often, the second dimension - the right problem - is neglected in scientific work related to real-world problems such as those arising in the airline industry or other complex transportation systems. There are many planning and operations problems at airlines for which, due to their complexity, detailed and accurate modeling is required to obtain useful and efficient solutions. At the same time there is continuous change in an airline environment, e.g., modifications to flight schedules, changes to agreements and disruptions of plans. Other characteristics that make these airline optimization problems challenging and important are their size, the amount of data involved and their impact on the profitability of an airline.


A large airline operation comprises several hundred aircraft, 10,000 or more air crew members, up to 100,000 flights per month and tens of millions of passengers per year. Continuously obtaining the best possible solution to the right problem is hence crucial for an airline to stay competitive and survive. Figure 1 provides a simplified, high-level overview of the operations of an airline and gives a typical timeline for the different planning steps. First, the schedule (or timetable) is produced. Here the objective is to match marketing expectations with available fleets and constraints on the network. The second step in airline planning is the allocation of aircraft to the flights. This step involves determining the right type and size of aircraft for each flight leg in order to maximize the expected profit and constructing aircraft rotations that satisfy operational constraints and maintenance requirements. Once this is done, the crew needs on each flight are known and become input for the crew-scheduling problem.
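The crew-scheduling problem these crew needs feed into is classically formulated as choosing a set of pairings that covers every flight leg exactly once at minimum cost. The toy sketch below (made-up legs, pairings and costs, solved by brute force) shows the shape of that set-partitioning problem; real instances are far too large for enumeration and need techniques such as column generation.

```python
from itertools import combinations

# Toy set-partitioning sketch of the crew-pairing step: pick pairings that
# cover every flight leg exactly once at minimum cost. All data is invented.
LEGS = frozenset({"SK401", "SK402", "SK403", "SK404"})
PAIRINGS = [  # (legs covered, cost in arbitrary units)
    (frozenset({"SK401", "SK402"}), 5),
    (frozenset({"SK403", "SK404"}), 6),
    (frozenset({"SK401", "SK403"}), 4),
    (frozenset({"SK402", "SK404"}), 4),
    (frozenset({"SK401", "SK402", "SK403"}), 9),
    (frozenset({"SK404"}), 3),
]

def cheapest_partition(legs, pairings):
    """Brute force over pairing subsets; fine for toys, not for airlines."""
    best_cost, best_combo = float("inf"), None
    for k in range(1, len(pairings) + 1):
        for combo in combinations(pairings, k):
            covered = [leg for legs_in, _ in combo for leg in legs_in]
            # exact partition: every leg covered, and none covered twice
            if len(covered) == len(legs) and set(covered) == legs:
                cost = sum(c for _, c in combo)
                if cost < best_cost:
                    best_cost, best_combo = cost, combo
    return best_cost, best_combo

cost, _ = cheapest_partition(LEGS, PAIRINGS)
print(cost)  # 8: pairing {SK401, SK403} plus pairing {SK402, SK404}
```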

Figure 1: Major phases in airline operations

Crew scheduling is traditionally done in two steps. First the crew-pairing problem is solved, where flight legs are sequenced into anonymous crew rotations (or trips) such that the crew requirements on each leg are satisfied, crew and other costs are minimized, and contractual and operational constraints are met. Then, in the crew-rostering problem, these trips and other activities are assigned to individuals, thereby building personal rosters. On the day of operations, all resource areas in an airline (aircraft, crew, gates, etc.) need to be controlled, and in case of disruptions, plans need to be repaired. Operations research has been very successfully applied to problems arising in airline operations. (For more details, see the recent survey on applications of O.R. in the air transport industry by Barnhart et al. [1].) Traditionally, the size and complexity of airline operations required the decomposition of the problem into more manageable sub-problems that are by themselves nontrivial planning or scheduling problems. A global modeling tool can provide at least consistency between these sub-problems and support their eventual integration to avoid sub-optimization. Besides the complexity of the problems arising in airline operations, there is the additional challenge of modeling, for example, a 200-page specification of a particular operations problem reasonably accurately. First of all, such a specification usually does not exist. And even if it did, any specifications and the resulting models would have to be revised and thought through again, and new hypotheses tested and verified. Hence, it is essential in a business environment in general, and the airline industry in particular, to be able to support a process with fast iterations to get as close as possible to the real problem and then subsequently adapt to changes in the future.

Modeling Systems and Rules Engines

The model is the foundation of an optimization application, especially when used for solving real-life problems. A common approach in problem solving is the separation of the problem definition (model) from the problem solution to allow the user to focus on the modeling. General-purpose modeling systems have been continuously developed for large-scale optimization. Many different commercially available modeling languages and rule systems support this problem-solving paradigm of separating problem definition and solution. The NEOS Guide on Optimization Software lists at least 15 such modeling systems [2]. In comparison, the number of commercially available rule systems is smaller. Besides following the problem-solving paradigm, modeling and rule tools also provide transparency for the business users and allow them to change models and rules, thereby making optimization techniques more accessible for non-experts. Using these tools, end-users can systematically explore, analyze and evaluate their models and the underlying optimization problems. Providers of modeling and rule systems have over the years also increased the usability of their systems, including application development tools and environments. These efforts have made business modeling less exclusive and more accessible to technically unskilled end-users who can now use optimization systems effectively. These strategic opportunities have already been described (see, for example, [3] and [4]). In an airline environment, there is a need to combine large production optimization with the prototyping power of modeling systems. To our knowledge none of the commercial modeling or rule systems listed in the NEOS Guide on Optimization Software is used for production planning and scheduling at airlines or in other large transportation systems. 
For more than 14 years, Carmen Systems has developed a combined modeling and rule system, Carmen Rave (Rave stands for "rule and value evaluator"), which is currently deployed by 20 airlines and three railway companies. The Carmen Rave language is a purpose-built rule and modeling language tailored for resource management and optimization problems in the transportation industry. A combined rule and modeling tool allows the implementation of costs, definition of objective functions for optimization, expression of quality aspects, etc. In other words, a combined tool allows a user to model all the characteristics of a particular application and describe them in rule code. This maps knowledge and expertise of the users into the rules engine and transforms an optimization system into an expert system. Preserving knowledge and expertise is crucial for consistency and continuity and cannot be achieved in a single iteration. Such a task needs continuous effort over time and a modeling tool that allows continuous adaptation to reflect the real-world problem characteristics. Furthermore, users of optimization systems should be able to analyze their environment to increase business knowledge and expertise. The creation of many what-if scenarios and simulations is necessary in this context. By supporting the co-existence of multiple rule sets, creating new scenarios can be done easily and quickly. Our business experience of selling optimization systems bundled with a combined rule and modeling system is that new customers will never compromise performance for flexibility. That is the way the optimization market works. If a major airline wants a new optimizer for its crew scheduling, it will invite several vendors and evaluate the quality of their solutions to a predefined benchmark problem. One percent difference in solution quality can mean $30 million of annual savings for the customer, so it is almost impossible for the buyer to justify anything other than the best quality solution, even if the difference is just a small fraction of a percentage point. During the development of Carmen Rave, we had to overcome several technical difficulties. It was a challenging marriage between computer science and O.R. - every language feature had to be designed carefully to meet performance requirements. Techniques like dynamic sorting, caching, scope identification, partial evaluation and other program transformations were developed to this end.
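The caching technique named above can be sketched in a few lines. This is our own plain-Python illustration, not Carmen's implementation: a memoized function stands in for an expensive rule evaluation that an optimizer probes thousands of times with the same arguments.

```python
from functools import lru_cache

# Sketch of caching rule values (hypothetical, not Carmen code): memoize an
# expensive evaluation so repeated probes of the same connection are free.
evaluations = {"count": 0}

@lru_cache(maxsize=None)
def connection_minutes(arrival_min: int, departure_min: int) -> int:
    evaluations["count"] += 1  # stands in for an expensive rule evaluation
    return departure_min - arrival_min

for _ in range(10_000):        # repeated probes hit the cache, not the rule
    connection_minutes(550, 585)

print(connection_minutes(550, 585), evaluations["count"])  # 35 1
```

Ten thousand probes trigger a single real evaluation; in a batch planning run, precalculating and caching rule values in this spirit is what makes hours-long optimizations tractable.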
Planning problems and real-time problems require different techniques. A planning problem may take several hours of CPU time to solve, so it pays off to build large cache tables of precalculated rule values. Analysis of rule call patterns has been done carefully for many different use cases. There is no standard solution to the performance problem of rule evaluations, so the run-time system of Rave has evolved into a hybrid of many algorithms.

Practical Experience

Rave is used with all Carmen applications and also is available as a stand-alone product to integrate in third-party software. As indicated above, more than 20 transportation companies use the system as an integral part of Carmen's suite of resource optimization products. These companies include Air France/KLM, British Airways, Delta Air Lines, Lufthansa and Northwest Airlines - five of the 10 largest airlines worldwide - as well as Deutsche Bahn (German Railways), one of the largest passenger transportation companies in the world. The rule modeling language allows the users at these companies to interact with optimization systems by defining rules, objectives and targets rather than modifying flight, crew or passenger records. One of Carmen's first clients was Scandinavian Airlines (SAS). In 1999, SAS decided to introduce a corporate rule system to be used within all crew management systems, regardless of vendor and technical platform. SAS had previously used Carmen Rave as a modeling and legality system with the Carmen crew-pairing system. The strategic objectives for using a corporate legality system were:

centralized rule maintenance by users,
automatic distribution of changes,
consistent rules and interpretations at all times in all systems, and
advanced simulation and test facilities.

Since the beginning of 2002, Rave has controlled all crew management systems at SAS, including the real-time tracking system and various optimization applications. SAS considers the ability to simulate and quickly respond to change as key to the survival of any airline. With a corporate rule system, SAS can constantly adjust to changes in the operation and can also perform simulations to analyze the consequences of potential modifications in the critical crew management systems. The complexity of crew management at SAS is staggering. A total of 1,700 pilots and 4,800 flight attendants are scheduled with different crew management systems. Crewmembers are positioned in bases in three countries - Denmark, Norway and Sweden - and each country has its own set of national regulations. Crewmembers are also part of different unions that follow differing agreements. In addition, both pilots and flight attendants can work in several positions and belong to different qualification groups. The overall objective is to use all available crew resources to operate the domestic and international flight schedule of SAS as effectively and efficiently as possible. Last but not least, quality-of-life considerations for individual crewmembers are also a priority for SAS. Besides meeting the strategic objectives listed above, SAS has also fulfilled the following business goals when implementing a corporate legality system:

Cost reductions. For example, operation and maintenance costs in the crew legality and composition area have been reduced by more than 40 percent.

Time to market changes and development. For example, new or changed agreements and rules are now implemented in hours or days instead of weeks or months, respectively.

Minimization of risk. For example, independence of vendors and other key persons has been achieved due to transparency of rule code.

The use of the Rave modeling language at SAS is a good example of the application of the problem-solving paradigm in a complex airline environment. Being able to model problems in accurate detail is necessary to get the full benefits from optimization and decision-support systems. In general, airline clients estimate that the modeling capabilities provide them with an additional 2 percent savings in crew costs on average, with some estimating savings of up to 5 percent. These savings are related only to the rules and modeling system and are achieved on top of the benefits delivered by the optimization systems. For an airline the size of SAS, 1 percent of crew cost savings corresponds to around $5 million a year. Additionally, SAS saves several million dollars by using a corporate legality system and meeting the business goals listed above. A rule system has become a necessity for SAS to maintain the large number of labor rules and regulations that are very complex compared to industry standards. In the airline's ambition to automate and optimize the crew planning process, a global and efficient rule and modeling language has become essential. Therefore, modeling has become a strategic core competence within the crew management operations of SAS.

Conclusion

End-user modeling and large-scale resource optimization can be successfully married for solving large and complex problems arising in airline operations. The combination of optimization and modeling power in the client's hands can be viewed as the real contribution of a modeling system. Being able to model problems in detail and accurately is necessary to get the full benefits from optimization and decision-support systems. There are a number of business problems SAS and other airlines solved that they would not have been able to solve without a modeling system. Through using a combined rules and modeling engine, operations and maintenance costs can be reduced, adjustment times to market changes shortened, and a higher level of quality and consistency achieved. The contributions of such systems to an airline's bottom line can be significant. Most importantly, however, such a modeling tool gives the power of operations research to the users at large transportation companies - to people who were not initially O.R. specialists.

Carmen Rave: Rule Modeling Example

The Carmen Rave language is a purpose-built modeling language tailored for resource management and optimization problems in the transportation industry. The language is a declarative programming language in which all objects belong to a level hierarchy that must contain a chain level (e.g. a crew roster) and an atomic level (e.g. a flight or so-called leg). The rule programmer can introduce intermediate levels (e.g. duties and trips). For each level, attributes can be defined using other attributes and a number of built-in or user-defined functions and operators. Basic building blocks are the keywords that are predefined attributes of the leg or the chain level. There are also so-called aggregators (such as sum and max) and specifiers (such as first and next) to make it possible to construct complex expressions. The aggregators remove the need for complex recursive functions that could potentially cause infinite recursion problems. Below is a small example of a rule ensuring that the connection time between two legs or flights is more than 30 minutes. Attributes that are derived from keywords must start and end with "%", and, in the example, duty is a user-defined intermediate level.

%connection_time% = next(leg(duty), departure) - arrival;
rule connection_time_ok = %connection_time% > 0:30;
end

As a consequence of the declarative nature of the language, the rule programmer never needs to consider when or on what level a rule needs to be checked. This is automatically derived from the rule description. Furthermore, the Rave compiler and runtime system have the important responsibility of communicating the model efficiently to the optimizers. This way, the rule programmer can focus on what he or she knows best - keeping the model as close to reality as possible. The rule source code is compiled into C code and dynamically linked to the decision support or optimization system. This makes it possible for the user to maintain and modify the problem description without downtime of the application.
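For readers unfamiliar with declarative rule languages, the connection-time rule can be rendered in plain Python. This is our own sketch (toy timestamps, not Rave code): iterate over consecutive legs of a duty and check that each arrival precedes the next departure by more than 30 minutes.

```python
from datetime import datetime, timedelta

# Plain-Python rendering (our own illustration, not Rave) of the
# connection-time rule: within a duty, each leg's arrival must be followed
# by the next leg's departure more than 30 minutes later.
duty = [  # (departure, arrival) per leg; toy data
    (datetime(2005, 4, 1, 8, 0),  datetime(2005, 4, 1, 9, 10)),
    (datetime(2005, 4, 1, 9, 45), datetime(2005, 4, 1, 11, 0)),
]

def connection_time_ok(duty, minimum=timedelta(minutes=30)):
    return all(next_dep - arr > minimum          # Rave's "> 0:30"
               for (_, arr), (next_dep, _) in zip(duty, duty[1:]))

print(connection_time_ok(duty))  # True: a 35-minute connection
```

The contrast with the Rave version shows what the declarative language buys: here we had to decide ourselves to iterate leg-by-leg within a duty, whereas Rave derives the evaluation level from the rule itself.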

Figure 2: Example of level hierarchy.

Carmen Rave: Architecture and Deployment

Carmen Rave is embedded in an airline IT environment. The rule and modeling system interacts with other systems through a high-performance API or a message interface for plug-in transaction systems. This follows the pattern of separating business rules from standard systems. In airline operations, one distinguishes two application areas, namely planning and operations. Products in these two areas distinguish themselves mainly through the type of data updates and their response times. While planning products receive batch updates of data and have response times on the order of hours, operational products get real-time updates, and their response time is expected to be on the order of seconds.

Figure 3: Directly linked Rave and the Rave Server architecture.

To meet these requirements, Rave can be employed in the following ways. In planning, Rave is usually directly linked to an application, thereby providing a high-performance API for large-scale optimization. Data that Rave needs for legality checks is accessed on demand directly from the application. For operations products, Rave is usually linked to thin clients via messaging interfaces using a Rave Server, as depicted in Figure 3. The thin clients contain only a small amount of planning data, while the bigger part of the data needed for legality checks is sent as references in the XML messages and loaded by the Rave Server. An integrated development environment, the Rave IDE, is the interface to the rule developer. It
is a graphical tool that provides one interface for centralized management of rules and objectives. The Rave IDE allows the user to edit, compile and navigate in the source code. The navigation capabilities are especially useful when working with the large number of rules found in real-world applications. The Rave IDE can also provide visualization of a rule or cost function evaluation. The evaluated values are displayed synchronized with the source code, which aids the rule developer's understanding. It can be used as a debugging tool, as well.

Erik Andersson (erik.andersson@carmensystems.com) is co-founder and CTO of Carmen Systems. Anders Forsman (anders.forsman@carmensystems.com) is product manager of Carmen Rave at Carmen Systems and is responsible for the development of Rave. Stefan E. Karisch (stefan.karisch@carmensystems.com) is vice president of Operations Research at Carmen Systems. Niklas Kohl (niko@dsb.dk) was a senior consultant at Carmen Consulting and is now manager of IT and Optimization in the planning department of the Danish State Railways (DSB). Allan F. Sørensen (allan.sorensen@sas.dk) is head of IT at Scandinavian Airlines Denmark A/S, with more than 20 years of experience in IT and crew management systems.
Operations Research For Everyone (including poets) - (ORMS Today, August 2005, Volume 32, Number 4)

Published by Peter C. Bell on 30/01/2007


Operations Research For Everyone (including poets)
Seven really useful O.R. frameworks that can be effectively employed by anyone.
By Peter C. Bell

Operations researchers have now been at it for more than 65 years and have developed a huge body of knowledge that many of us believe is really useful. However, although we can certainly argue that operations research has improved living standards by reducing the cost of many everyday items, most would agree that we have had little effect on the way that ordinary people live their lives. Resistance to having our materials more broadly adopted comes from the fact that O.R. relies on mathematics for its major impact, and the world is not, by and large, populated with mathematicians. In business schools we routinely face classes that contain both engineers and "poets," and we must include materials in our courses of value to both groups (and everyone in between). From my experience in this environment, I have concluded that "O.R.: the science of better" has much to offer that could improve the lives of everyone, including people who will not or cannot understand mathematics. To reach everyone, however, we have to promote O.R. topics that remain valuable when taught without the math. Luckily, these topics are not hard to find. Most business school faculty would agree that their courses prepare students to analyze complex business problems, but many of their courses contain little mathematics or statistics. How can "strategic analysis" or "market analysis" be taught without spreadsheets, statistics or mathematics? The answer lies in recognizing that "analysis" as understood and practiced by a very large number of very bright and highly successful people is often completely qualitative. Often a "rigorous analysis" of a complex problem involves a deep, thoughtful discussion centered on some form of framework that provides a common perspective and vocabulary to guide the participants' thinking. 
For example, the "SWOT" (strengths, weaknesses, opportunities, threats) framework represents a state-of-the-art approach for resolution of a complex decision problem for many people, although SWOT provides no algorithm to help you actually choose an alternative. Similarly, the Porter strategic model and the Balanced Scorecard provide qualitative frameworks that have been shown to help intelligent people resolve complex decision situations successfully, but neither includes any form of decision algorithm. The O.R. body of knowledge is built upon a set of frameworks that enable O.R. people to first model and then resolve complex decision issues. We O.R. teachers are prone to spend most of our class time with modeling and algorithmic details, and consequently we may not spend enough time emphasizing the basic frameworks that allow us to build the models and algorithms in the first place. Knowledge of these frameworks can be highly beneficial to real world decision-makers, even if they don't build the model or run the algorithm.

28

Based on my experience teaching O.R. to both engineers and poets, here is my list of seven really useful O.R. frameworks that can be effectively used by anyone.

1. How to Make a Good Decision

The ability to make good decisions is one of life's most valuable skills and is a key success factor for both personal life and business. Managers become identified by their decision-making ability - those who make great decisions get promoted, while those who consistently make poor decisions struggle. O.R. includes a huge body of knowledge on how to make great decisions, but few people have been exposed to these ideas. The O.R. decision-making framework includes many important concepts. For example, if you are going to make a choice, you need to think about what alternatives you have to select from, which criterion (or criteria) your choice affects and what value you attach to them, how risky the outcomes are and how much risk you are prepared to tolerate, and how to handle the ever-present trade-off between return and risk. The O.R. decision-making framework also recognizes that not all decisions are simple "choose an action and live with it" situations. Many decision situations involve sequences of choices and uncertainties, where decision-making about future choices conditional on observed outcomes is critical to understanding current choices. In such situations, an understanding of contingency analysis is helpful. What do I do if this happens? When should I change my future conditional decision? It may be important to understand the timing of sequential decisions and to delay making commitments until you have all the available data. Most O.R. people will see this as pretty simple stuff and will want to start building the tree, estimating utility functions and computing expected values, but we forget that most of the world has not been exposed to these central ideas. For these people, building the tree may not be so important; the value of the analysis may come from using this framework to think more deeply about the alternatives, criteria, and the risks and potential returns. Once this is done, a great many decisions become pretty straightforward.
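For readers who do want to "build the tree," the expected-value comparison that O.R. people reach for is only a few lines. The payoffs and probabilities below are invented for illustration.

```python
# A minimal decision-tree sketch with made-up numbers: compare a certain
# payoff against a risky one by expected value.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

safe = [(1.0, 40.0)]                  # take $40 for certain
risky = [(0.5, 100.0), (0.5, -10.0)]  # coin flip: win $100 or lose $10

ev_safe = expected_value(safe)        # 40.0
ev_risky = expected_value(risky)      # 0.5 * 100 - 0.5 * 10 = 45.0
best_by_ev = "risky" if ev_risky > ev_safe else "safe"
```

By expected value the risky option wins, 45 to 40, yet a risk-averse decision-maker may still prefer the certain $40 - which is exactly where utility functions and risk tolerance enter the framework.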
2. How to Tell a Good Decision from a Bad One

Most of the world believes that good decisions are ones that turn out well while decisions that turn out poorly were bad decisions. We O.R. people know that sometimes good decisions turn out badly, and bad decisions turn out well; there is both good luck and bad luck. Since we cannot assess the quality of a decision by its outcome, and yet we value people who make good decisions, we can be a step ahead if we have a framework that enables us to tell a good decision from a bad one, or a good decision-maker from a chump. O.R. recognizes that good decisions - whether they turn out well or poorly - are made by following a sound decision-making process. O.R. has many decision-making process frameworks that enable us to critically review the process that led to a decision. Most of these include at least three important steps. The first step has several names ("pathfinding," "thinking outside the box," etc.) but involves spending some time generating alternatives: What could we do? This step provides an opportunity to be creative without being shackled by the status quo, but it is also important to be exhaustive; it is hard to decide to do something that you did not think was a possibility. The second step involves analysis of the alternatives dreamed up in the first step in order to arrive at a small set of decisions that are implementable. "Analysis" comes in many forms. "SWOT" is state-of-the-art for many people, but most operations researchers would see this as very light. A more satisfactory analysis would address the issues raised in "How to make a good decision" (above). Importantly "analysis" almost always involves making simplifying assumptions, and consequently we recognize that the outcome of "analysis" should not be a decision, but rather a set of recommendations for possible decisions. The final step in a sound decision-making process is to step back and consider the recommendations of the analysis under a real world lens. 
In a business, this would involve management review and then action. This review may cover tradeoffs among criteria (including the risk-return tradeoff among the various recommendations), assumptions about uncertainties, or which recommendation best aligns with management's objectives or corporate strategy. This simple decision-making process framework provides a whole set of intelligent questions anytime someone proposes a course of action: What alternatives did you consider? What analysis did you do that leads you to conclude that this is a sound course of action? What assumptions were made in the analysis? How does your recommendation align with our real-world objectives? It also provides fodder to challenge some of the popular "buzz" of the day; for example, does "thinking outside the box" imply "acting outside the box"?

3. How to Cope with an Uncertain Future

Whether we like it or not, most decisions are made about the future, so decision-makers must almost always cope with uncertainties. O.R. includes several frameworks that enable us to model uncertainty, which also enable us to examine how effectively other people cope with an uncertain future. We can separate people who ignore uncertainty by pretending that
averages will happen (exposing themselves to Sam Savage's "Flaw of Averages") from others who construct "best case" and "worst case" scenarios by combining the optimistic or pessimistic outcomes for all the stochastic events (and thereby often make their decisions based on scenarios that could well have a vanishingly small probability of actually occurring). Stepping up from these naïve approaches requires a more detailed framework for thinking about uncertainty, and the O.R. body of knowledge provides a valuable set of concepts and tools that add value to any decision situation. We start with uncertain "events," which we can list to identify the sources of uncertainty. Some events are simple: Will it rain tomorrow? Other events sound complex: Will our product pass our market research trial, and how many will we sell over a five-year period? These can usually be broken down into sequences of simpler events that are easier to understand. Probabilities provide a way of representing what we think will happen at each event and can be assessed without doing any math. We can use our skill and knowledge to assign subjective probabilities, or we can ask someone more knowledgeable about the event than us to provide probabilities. If we really want to know the probability of rain tomorrow, it's not a bad idea to ask a weather forecaster. Many people do not like the idea of subjective probabilities, and there is an alternative: You can collect some data and extract probabilities from it. Extracting rough probabilities from data does not always require a degree in statistics or operations research. A jewelry store here in town offered a 100 percent refund on all pre-Christmas purchases if there was more than 15 centimeters of snow on Jan. 7. Everyone wanted to know whether this offer was worth anything. Daily snowfall data is available on the Web, and some counting is all that is required to come up with pretty good probabilities.
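The snowfall promotion shows how far plain counting can go. The sketch below uses invented Jan. 7 snowfall figures (not real observations) to show the calculation.

```python
# Counting historical outcomes to estimate a probability, in the spirit of
# the snowfall promotion above. The snowfall figures (in cm) are invented
# for illustration.
jan7_snowfall_cm = [0, 2, 18, 0, 5, 21, 0, 0, 9, 16,
                    0, 3, 0, 7, 0, 25, 1, 0, 0, 4]

def empirical_probability(observations, event):
    """Fraction of past observations in which the event occurred."""
    return sum(1 for x in observations if event(x)) / len(observations)

# Probability that more than 15 cm falls, estimated by simple counting:
# 4 of the 20 invented years exceed 15 cm.
p_refund = empirical_probability(jan7_snowfall_cm, lambda cm: cm > 15)
```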
Listing relevant events with their probabilities provides the basis for a deep and rich discussion of the uncertainties in the future. Such a discussion will produce an appreciation of the possible risks and rewards, will identify sources of disagreement among multiple parties to the decision, and will improve most people's decision-making.

4. How to Prosper in Risky Situations

Recognizing the opportunities that risk provides is a way of achieving personal and professional goals, yet when we expose students to risky situations in the classroom, they overwhelmingly decide to avoid taking on risk, even in situations where the expected value is quite positive and even when playing with corporate funds. Corporations value risk takers who can prosper by recognizing the upside of risky situations, and O.R. provides tools to help people overcome their aversion to downside risk. The "three Ms" of risk provide a framework that helps people to understand how to take on risk and capture the upside. First, risk must be measured so we understand what we are dealing with. A risk profile that lists all the possible outcomes with their probabilities provides the most complete specification of the riskiness of a situation, although other summaries (such as the mean and standard deviation of returns) are also useful. A risk profile can be put together subjectively as a discussion item, although we would like to see more people using a spreadsheet with random numbers to move from their knowledge of elementary events to a risk profile for a more complex outcome. Second, risk can usually be mitigated; if we like the upside but not the downside, there are usually steps that we can take to reduce our downside probabilities. These might include selling off the risk to someone else (for example, by buying insurance), taking on a partner to share the risks and rewards, designing a trial so that preliminary results are obtained for a much reduced cost or diversifying. Diversification is a powerful risk mitigation tool and not just in putting together a financial portfolio. The firm that takes on only a single research-and-development project might be exposed to high downside risk, but the firm that takes on 20 such projects is going to come away pretty close to the expected return. The third "M" of risk is management. 
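Before turning to the third M, the first two - measuring a risk profile with random numbers and mitigating downside risk through diversification - can be illustrated with a small simulation. The project payoffs and probabilities below are invented for illustration.

```python
import random

# Monte Carlo sketch of "measure" and "mitigate" with made-up numbers: each
# R&D project pays 10 with probability 0.3 and loses 1 otherwise (expected
# value 2.3 per project).
def mean_payoff_per_project(n_projects, trials=20_000, seed=1):
    """Simulate the per-project return of a portfolio of identical projects."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        total = sum(10.0 if rng.random() < 0.3 else -1.0
                    for _ in range(n_projects))
        results.append(total / n_projects)
    return results

single = mean_payoff_per_project(1)      # risk profile of a lone project
portfolio = mean_payoff_per_project(20)  # risk profile of 20 such projects

# The chance of losing money: large for the lone project, tiny for the
# diversified portfolio, even though the expected return per project is
# the same in both cases.
p_loss_single = sum(r < 0 for r in single) / len(single)
p_loss_portfolio = sum(r < 0 for r in portfolio) / len(portfolio)
```

The lone project loses money about 70 percent of the time; the 20-project firm almost never does - the simulated risk profile makes the diversification argument above concrete.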
Managers exposed to serious downside risk don't just sit there and await fate. They monitor the situation very closely and do everything they can to change the probabilities or the payoffs in their favor as events unfold. This might involve spending additional funds to ensure a desired outcome occurs or cutting a trial short if preliminary results suggest it is not going to work out as expected. When facing a risky decision, everyone should ask three questions: What are the risks? How can I mitigate the risks? How can I manage the risks? Thinking through the answers to these questions in advance will improve everyone's decision-making.

5. Recognizing and Exploiting Simultaneous Decision Situations

Most people outside the O.R. community believe that all complex decisions can be broken down into a series of simple decisions that can be taken sequentially to achieve a good solution. The O.R. body of knowledge recognizes that this approach can be expensively sub-optimal, even if you take the additional step of iterating a few times to ensure that the early decisions remain consistent with the later ones. The O.R. framework for identifying and solving simultaneous decision problems, which we usually call "optimization," involves identifying the decisions to be made, finding an objective that enables alternate solutions to be compared, and recognizing the constraints that limit the range of implementable solutions. This is one case, however, where the framework also includes recognition of the fact that we humans are very poor at solving complex simultaneous problems intuitively. There is certainly value in laying out any decision problem using this framework, but we recognize that the big advantage in the case of simultaneous decision problems comes from using a tool (such as Excel Solver) to actually "do the math" and solve the problem. Those who understand the simultaneous framework and can recognize the kinds of problems where a sequential decision
approach fails, and who know a better approach, have a competitive advantage in the marketplace. A great many people, including students of mine, have used the standard Excel Solver to save thousands of dollars for their employers.

6. Revenue Management

Revenue management (RM) has had, and is having, a dramatic effect on the way firms price their products and make these products available to the marketplace. However, the human impact is even greater; most people who have been exposed to highly variable pricing or restricted supply are frustrated because they do not understand the revenue-managed marketplace. An understanding of the basics of RM pricing and a framework that ties the various tools (such as overbooking, trading-up, discount allocation, short-selling and reservation levels) together is a professional and social winner and will also help with personal shopping. Knowing how to maximize revenues while selling the same quantity of product is also a business winner. [More on the concepts of RM can be found in "Revenue Management for MBAs," OR/MS Today, August 2004, pp. 22-27. (http://lionhrtpub.com/orms/orms-8-04/frbell.html)]

7. How to Link O.R. to Corporate Strategy

The ascendancy of O.R. to an important role in the business world will require that highly paid and highly intelligent managers come to understand the strategic value of O.R. to their organizations. Many of our senior "C-level" executives are not quantitatively trained and see management as an art rather than a science. Convincing these executives of the strategic value of O.R. so that they will invest in O.R. work requires a framework that links O.R. to strategy. A basic framework describes four ways to link O.R. and strategy. First, since the primary impact of a successful business strategy is that it creates a competitive advantage that is sustainable over a period of time, O.R. work that creates and maintains a competitive advantage is strategic to the corporation. There are many well-documented examples of such "strategic O.R." in the literature (see Bell, Anderson and Kaiser, Operations Research, Vol. 51, No. 1, 2003). Second, O.R. can be linked to strategy by providing assistance with the resolution of decisions that are strategic to the organization. Third, a number of organizations have had an O.R. group that for some time was comprehensively involved in their organization's decision-making. Examples include the O.R. groups at FedEx and San Miguel Corporation, the decision technologies group at American Airlines before the spin-out of Sabre, and the "Global Analytics" group at Procter and Gamble. Finally, a number of firms market O.R. products, and for them, nurturing their O.R. capability is a critical part of their business strategy. Examples include firms that market O.R. tools (e.g., ILOG, Frontline Systems), firms that market solutions that include serious O.R. algorithms (Giro, Aspen Technologies, Visual8) and firms that provide O.R. consulting services.

Much of the real-life impact of O.R. arises from the application of our basic frameworks in a thoughtful way, rather than from building sophisticated models or performing complex calculations.
Everyone, including the poets, can achieve significant benefits by using these frameworks to help them make decisions and to contribute meaningfully in an environment where complex decisions are made. The business world believes that SWOT or the Porter model represent the leading edge of business analysis, but people armed with these seven basic frameworks (in addition to a familiarity with SWOT and Porter) ought to be able to do better. We should seize the opportunity to put our basic frameworks out there. If we are successful, we will improve people's lives, enhance our students' promotion prospects and help further develop "O.R.: the science of better."

Peter C. Bell is a professor at the Ivey School of Business, University of Western Ontario.

Making Skies Safer (ORMS Today, October 2005, Volume 32, Number 5)

Published by Laura A. McLay, Sheldon H. Jacobson and John E. Kobza on 30/01/2007


Making Skies Safer
Applying operations research to aviation passenger prescreening systems.
By Laura A. McLay, Sheldon H. Jacobson and John E. Kobza

The terrorist events of Sept. 11, 2001, will forever alter the way our nation views aviation security. The article by Barnett (2001) in OR/MS Today highlighted numerous important questions and issues surrounding the events of that day and how air travel has been and will continue to be affected. Four years later, aviation security systems have undergone significant changes, though the analysis of such systems continues to lag well behind their actual operation. Operations research provides a unique set of methodologies and tools for designing and analyzing aviation security systems, since the foundation of operations research is based on applying analytical methods to optimally allocate and use scarce assets in making better-informed decisions. The purpose of this article is to provide a brief survey of aviation security system applications that have been used or are well positioned to benefit from operations research modeling and analysis techniques. The research efforts discussed apply operations research methodologies to address problems in the area of passenger prescreening, an important and highly visible aspect of aviation security operations. Three specific issues are highlighted: identifying performance measures, analyzing how passenger prescreening systems can fail or succeed, and designing effective passenger screening systems. Over the past four years, there have been numerous changes to all aspects of aviation security systems, all designed to prevent a recurrence of the events of Sept. 11, 2001. Some of the changes include reinforcing cockpit doors, expanding the federal air marshal program, allowing only ticketed passengers to enter the enplane side of airport terminals, using bomb-sniffing dogs and screening all checked baggage for explosives. Many of the changes implemented have been politically driven - they have been a direct result of the "knee-jerk" emotional response to Sept.
11, rather than from any coordinated, systematic analysis and planning. For example, within two months after the attacks, the United States Congress mandated 100-percent screening of checked baggage by a federally certified screening device or procedure by Dec. 31, 2002, as part of the Aviation and Transportation Security Act. Prior to Sept. 11, only a small fraction of checked baggage was screened in this manner. The rapid deployment of explosive detection devices in order to meet this deadline resulted in several billion dollars being invested before any type of systematic analysis of baggage screening security systems was performed. Operations research provides methodologies that can be used to determine how taxpayer dollars can be optimally spent and how security system assets can be optimally used.

Passenger Screening and Prescreening


There are two basic approaches to passenger screening: uniform screening and selective screening. From the introduction of passenger screening in the early 1970s until 1998, a uniform screening strategy was used, whereby all passengers were screened in the same manner. During this period, passengers were screened by metal detectors, and their carry-on baggage was screened by X-ray machines. The main argument for uniform screening is that all passengers should receive the highest level of screening since anyone could pose a threat. In contrast, a selective screening strategy targets additional security resources on a few passengers perceived as being of higher risk. The main argument for selective screening is that directing expensive security assets toward fewer passengers may be more cost-effective since most passengers do not pose a threat to the system. Passenger screening systems can be designed to detect items that are a threat or passengers who are a threat. Through the use of X-ray machines and metal detectors, the passenger screening systems currently used in the United States focus on detecting items that are a threat. Although this does not prevent terrorists from boarding airplanes, detecting threat items removes the tools that can be used to stage an attack. The Transportation Security Administration (TSA) has pursued the notion of detecting passengers who are a threat by coupling selective screening systems with a passenger prescreening system, an automated computer system that performs a risk assessment of each passenger prior to their arrival at the airport. If such a system is used, how passengers are screened at the airport is a function of their assessed risk. In 1998, a selective screening system was implemented that used a computer-aided passenger prescreening system (CAPPS) to select passengers for additional screening. CAPPS was designed to eradicate human bias in the risk assessment decision-making process.
Those passengers who were cleared of being a security risk were labeled nonselectees, while those who could not be cleared of being a security risk were labeled selectees. The main screening difference between these two classes of passengers is that checked bags of selectees were screened for explosives. Although the exact information used by CAPPS is classified, reports in the popular press indicate that it used information provided at the point of ticket purchase, including demographic and flight information, frequent flyer status of the passenger, and how the passenger purchased their ticket. CAPPS has been in use since 1998. After Sept. 11, aviation security moved in the direction of uniform screening with the enactment into law of the 100-percent checked baggage screening mandate, which eliminated the distinction between selectees and nonselectees. The TSA revisited selective screening policies through the development of CAPPS II, a refinement of CAPPS. However, on July 14, 2004, the TSA announced that CAPPS II would not be implemented due to privacy concerns, despite having invested $100 million in its development. Shortly thereafter, the TSA announced plans to replace CAPPS II with Secure Flight, a passenger prescreening system akin to CAPPS II, which partitions passengers into three risk classes: selectees, nonselectees and a third class of passengers who are not allowed to fly. This third group is extremely small and is, in part, based on FBI watchlists. Cost-benefit analyses of different baggage screening strategies provide a method of assessing and comparing the value of such approaches. Virta et al. (2003) perform an economic analysis capturing the tradeoffs of using explosive detection systems (EDSs) to screen only selectee baggage versus screening both selectee and nonselectee baggage (i.e., the 100-percent baggage screening mandate).
They conclude that the marginal increase in security per dollar spent is significantly lower for the 100-percent baggage-screening mandate than when only selectee bags are screened. Jacobson et al. (2005) incorporate deterrence into this model (one of the indirect benefits of screening both selectee and nonselectee baggage), based on a remark by the inspector general of the United States Department of Transportation, and conclude that the cost effectiveness of the 100-percent baggage screening mandate depends on the degree to which it can reduce the underlying threat level. Barnett et al. (2001) perform a large-scale experiment at several commercial airports in the United States to estimate the costs and disruptions associated with a positive passenger baggage matching policy (PPBM). Under PPBM, unaccompanied checked baggage is removed from aircraft on originating flights. PPBM can be applied to all or a portion of checked baggage. The findings of Barnett et al. (2001) counter predictions by the airlines that using PPBM would be expensive and result in widespread delays when used on all checked baggage. They found that, on average, one in seven flights experienced a delay, with each such delay averaging approximately seven minutes.

Identifying Performance Measures

Based on the number of aviation security changes that have been implemented since Sept. 11, 2001, and the fierce political and public debate surrounding these changes, it has become apparent that it is a challenge to define what good aviation security is. Identifying performance measures of interest is important not only for long-term planning of security systems, but also for efficiently managing day-to-day operations and effectively managing security systems in transition. These performance measures can be incorporated into various types of passenger screening problems, including applications in discrete optimization models, applied probability models, cost-benefit analyses and risk assessments. Since Sept. 11, 2001, much of the interest in passenger screening systems has been limited to reducing the false clear rate - the conditional probability that there is no alarm response for a threat passenger or bag. An alternative is to reduce the false alarm rate - the conditional probability that there is an alarm response for a nonthreat passenger or bag. The false clear and false alarm rates cannot be simultaneously minimized (Kobza and Jacobson 1997). For example, if all passengers were allowed to board their flights with no screening, the false alarm rate would be 0 percent while the false clear rate would be 100 percent. Since the vast majority of passengers are not threats, most alarms are in fact false alarms. A system with a low false clear rate may have a large false alarm rate, which can be very expensive, since there must be secondary screening procedures in place to
resolve such alarms. In rare cases, the bomb squad must inspect a suspect bag or an airport terminal must be shut down for several hours, resulting in millions of dollars in losses to the airlines for a single false alarm incident. Other performance measures deal with passenger screening systems in transition. When CAPPS was used to determine which checked baggage was screened for explosives between 1998 and 2001, there was an insufficient number of baggage screening devices available in many of the nation's airports to screen all selectee bags for explosives. This partial baggage-screening problem has not been made obsolete by the 100-percent baggage-screening mandate following Sept. 11. It models any scenario in which a new screening technology has been partially deployed and is used under a selective screening system and, because of limited capacity, not all selectees can be screened by the new technology. These performance measures focus on the types of risk that can be reduced by a single screening technology or a series of screening devices working together in a system. There may be other types of risk on a flight that are not considered by these performance measures. Fully utilizing baggage-screening devices is one possible performance measure for the partial baggage-screening problem. Intuitively, it is desirable to screen additional checked bags so that the new screening devices are used up to their capacity. Jacobson et al. (2003) introduce two alternate performance measures that capture risk across a set of flights and incorporate them into discrete optimization models. The measures are considered for a set of flights carrying both selectee and nonselectee baggage. A flight is said to be covered if all the selectee bags on it have been screened and cleared. One measure considers the total number of covered flights. Optimizing over this measure minimizes the number of flights that may be subject to a particular risk.
Another measure considers the total number of passengers on covered flights. Optimizing over this measure minimizes the total number of passengers on flights that may be subject to a particular risk. Note that by optimizing over these measures, the utilization of the baggage screening devices is indirectly maximized, though depending on which measure is chosen, the security of the system can be determined to be optimal in two distinct ways, putting either fewer flights at risk or fewer passengers at risk.

Analyzing Selective Passenger Screening Systems

Aviation security professionals have expressed concern over the actual effectiveness of selective screening systems like Secure Flight in preventing attacks, given the variety of ways in which such systems can fail. Three research efforts are highlighted to illustrate how operations research tools such as risk analysis, algorithm design and applied probability can be used to analyze the flaws in selective screening systems. A weakness of any selective screening system is that it may be possible to game it through extensive trial-and-error sampling. At present, passengers are aware of whether they have been classified as selectees or nonselectees each time they travel (most notably, by an indicator on their boarding pass, as well as by the additional screening attention they receive at the security checkpoint). Terrorists can exploit this information to determine how they are most likely to be classified as nonselectees by flying on a number of flights and effectively sampling the characteristics that result in a nonselectee classification. Therefore, terrorists do not need to understand how the prescreening system works; they merely need to be able to manipulate the prescreening system to get the desired result (i.e., be classified as nonselectees). Chakrabarti and Strauss (2002) present this strategy as the "Carnival Booth" algorithm, which demonstrates how a system using prescreening may be less secure than systems that employ random searches. Another weakness of any selective screening system is its dependence on passenger information to accurately assess passenger risk. The specific details underlying the currently used selective screening system are classified. Moreover, it is not clear how such a system will correctly identify terrorists as selectees when compared to random screening.
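The Carnival Booth probing strategy can be illustrated with a toy simulation. Everything in this sketch (the profile attributes, the hidden scoring weights, the threshold and the random-selection rate) is invented for illustration; the actual prescreening criteria are classified:

```python
import random

random.seed(1)

# Toy model of the "Carnival Booth" probing attack. The prescreening
# system flags a traveler as a selectee whenever a hidden score exceeds
# a threshold, plus a small random-selection component. All weights and
# rates below are assumed for illustration only.
HIDDEN_WEIGHTS = {"paid_cash": 3, "one_way": 2, "no_bags": 1}
THRESHOLD = 3          # a score at or above this is always a selectee
P_RANDOM = 0.05        # fraction selected at random regardless of score

def is_selectee(profile):
    """What the attacker observes on each flight: selectee or not."""
    score = sum(w for key, w in HIDDEN_WEIGHTS.items() if profile[key])
    return score >= THRESHOLD or random.random() < P_RANDOM

def probe(profile, flights=5):
    """Fly several times on a candidate profile; keep it if never flagged."""
    return not any(is_selectee(profile) for _ in range(flights))

profiles = [{"paid_cash": a, "one_way": b, "no_bags": c}
            for a in (0, 1) for b in (0, 1) for c in (0, 1)]
safe_profiles = [p for p in profiles if probe(p)]
print(f"attacker keeps {len(safe_profiles)} of {len(profiles)} probed profiles")
```

Because high-scoring profiles are always flagged, every profile the attacker keeps is one the hidden rule treats leniently; the trial flights are needed only to defeat the random-selection component. The attacker never learns the scoring rule itself.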
It is also a challenge to accurately assess whether a selective screening system has been effective, since terrorist attacks are rare events, and how terrorists behaved in the past may not be predictive of how terrorists will behave in the future. Barnett (2004) uses risk analysis, applied probability and data mining to analyze these issues regarding prescreening systems. He concludes that using a prescreening system such as Secure Flight may improve aviation security under a particular set of circumstances, namely, if it does not reduce the screening intensity for nonselectee passengers, if it increases the screening intensity for selectees, and if the fraction of passengers identified as selectees does not decrease. For all these reasons, Barnett (2004) recommends that Secure Flight be transitioned from a security centerpiece to one of many components in future aviation security systems. The TSA developed the Registered Traveler Program to use in conjunction with Secure Flight. The program is designed to avoid "wasting" security resources on extremely low-risk passengers. To enroll in the Registered Traveler Program, a passenger must pass a voluntary background check and submit biometric information for identity verification when traveling. Once part of the program, these passengers undergo expedited screening in designated security lanes. Barnett (2003) outlines several potential problems with such a program, and suggests that in the worst-case scenario, the Registered Traveler Program improves screening efficiency without improving the ability to positively identify terrorists. The Registered Traveler pilot program is currently being tested at airports throughout the United States. These weaknesses of selective screening systems raise the question of whether to spend security dollars on improving intelligence or on building more effective screening technologies. McLay et al.
(2005c) explore this issue by performing a cost-benefit analysis using concepts from applied probability and optimization. In their analysis, more effective (though more expensive) screening technologies are considered for screening selectee baggage, given a range of accuracy levels for a
prescreening system in assessing passenger risk. Several selective screening scenarios are identified that are preferable to screening all passenger baggage with explosive detection systems (EDSs), by reducing the number of successful attacks with moderate cost increases. They conclude that the accuracy of the prescreening system is more critical for reducing the number of successful attacks than the effectiveness of the baggage screening devices used to screen selectee baggage when the proportion of the passengers classified as selectees is small.

Designing Effective Selective Passenger Screening Systems

Prohibitive costs, long security lines and questionable effectiveness in preventing attacks have impeded passenger screening initiatives. Significant infrastructure changes have been made at several airports to accommodate new screening devices, and passengers have been subjected to long lines in airport lobbies awaiting screening. Passenger screening system designs must consider the potential impact of cost, space, throughput and effectiveness. Three research efforts are highlighted that use operations research methodologies to design selective screening systems. One solution to this situation focuses on designing multilevel passenger prescreening systems. Multilevel systems are those in which an arbitrary number of classes for screening passengers are considered, rather than the two classes (i.e., selectees and nonselectees) currently being used. A class is a set of procedures using security devices for screening passengers. The nonselectee class, for example, may screen checked baggage with EDSs, passengers with metal detectors and carry-on baggage with X-ray machines. One way to improve selective screening systems is to use expensive baggage screening technologies with low throughput to screen passengers perceived as higher-risk. This has the potential to be a more cost-effective approach, primarily by increasing overall throughput. Butler and Poole (2002) design a layered approach to screening passengers and baggage instead of the existing TSA policy of 100-percent checked baggage screening using EDSs by considering the economic impact of using different screening technologies. They consider three groups of passengers: lower-risk passengers who have volunteered for extensive background checks, lower-risk passengers about whom little is known and higher-risk passengers. They recommend screening baggage with three layers of baggage screening devices.
By weaving passengers through three layers of security devices composed of EDSs, high-throughput backscatter and dual-energy X-ray devices, and hand searches, throughput is increased while the overall false clear rate remains at a level comparable to that of the 100-percent baggage screening mandate. Butler and Poole make similar recommendations for passenger screening. One implication of this screening system is that the resulting improved throughput indirectly decreases space requirements and waiting times in airport lobbies, which is of interest because many airport lobbies were not designed to accommodate extensive screening systems and excessively long waiting lines. Two multilevel passenger screening problems (that are formulated as discrete optimization models) give insight into how screening devices should be purchased and deployed (McLay et al. 2005a,b). An analysis of a greedy heuristic for the first problem suggests that using only two classes is particularly effective, which supports the two-class paradigm of Secure Flight. For the first problem, each of the classes is defined in terms of its fixed cost (the overhead costs), its marginal cost (the additional cost to screen a passenger) and its false clear rate, with a passenger prescreening system such as Secure Flight used to differentiate passengers. The objective is to minimize the overall false clear rate subject to passenger assignments and budget constraints. The second problem, a complementary problem to the first, considers screening devices that have been purchased and installed. The second problem illustrates how devices shared by multiple classes are used. Each class is defined by the device types it uses, and each device type has an associated capacity (throughput) in a given unit of time. Optimal solutions to examples with more available classes are more sensitive with respect to changes in passenger volume and device capacity.
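A toy instance of the first problem can make the structure concrete. The class costs, false clear rates, threat probabilities and budget below are all assumed for illustration (the cited technical reports solve far larger integer programs); a brute-force enumeration stands in for the integer programming formulation:

```python
from itertools import product

# Toy instance (all numbers assumed): two screening classes, each with a
# fixed cost (paid only if the class is used), a per-passenger marginal
# cost and a false clear rate. Passengers carry assessed threat
# probabilities produced by a prescreening system.
classes = [
    {"fixed": 2.0, "marginal": 1.0, "false_clear": 0.30},  # basic
    {"fixed": 5.0, "marginal": 3.0, "false_clear": 0.05},  # enhanced
]
threat_prob = [0.001, 0.002, 0.010, 0.040, 0.080]  # assumed prescreening output
BUDGET = 15.0

best = None
for assign in product(range(len(classes)), repeat=len(threat_prob)):
    used = set(assign)
    total_cost = (sum(classes[j]["fixed"] for j in used)
                  + sum(classes[j]["marginal"] for j in assign))
    if total_cost > BUDGET:
        continue
    # expected number of threats that clear screening undetected
    exp_false_clears = sum(p * classes[j]["false_clear"]
                           for p, j in zip(threat_prob, assign))
    if best is None or exp_false_clears < best[0]:
        best = (exp_false_clears, assign, total_cost)

risk, assign, cost = best
print(f"optimal assignment {assign}, cost {cost}, expected false clears {risk:.4f}")
```

With this budget the optimum opens both classes and sends only the highest-risk passenger through the expensive one, which echoes the greedy-heuristic insight that a small number of classes, used selectively, captures most of the benefit.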
This research suggests that incorporating prescreening systems into discrete optimization models provides insight into efficient selective screening systems.

Conclusions

Operations research practitioners have the unique opportunity to make a difference in aviation security. New directions in aviation security need not merely be makeshift political solutions for mending complex problems; they can be the result of modeling, analysis and planning. By illustrating several ways in which operations research has made an impact in passenger prescreening systems, it is shown to have a place in the design and analysis of aviation security systems. However, there are some limitations. First, when doing operations research modeling (or in fact, mathematical modeling of any type), one must often make assumptions that may limit the applicability of the results obtained. Though such assumptions are often based on reasonable and realistic factors, they may pose difficulties in facilitating the transfer of the operations research analysis to decision-makers, since errors can lead to security breakdowns that may place people at unnecessary risk. Second, operations research models quite often look at an application's average or mean performance. In aviation security systems, average performance does not always capture the most interesting and salient aspects of such operations, which are often concerned with rare events and events "at the extremes." The issues discussed here represent but the tip of the iceberg. There are numerous problems in aviation security that can benefit from operations research methodologies, including improving perimeter access security with respect to airport employees,
designing models for cargo screening, analyzing passenger throughput and space associated with security lines, and modeling secondary screening of passengers and their baggage when screening devices give an alarm response, to name just a few. By using operations research methodologies to gain insight into ways to improve aviation security system operations and performance, our field can make a lasting impression on our nation's security and well-being.

Acknowledgments

The authors would like to thank Professor Arnold Barnett, George Eastman Professor of Management Science at MIT's Sloan School of Management, for his insightful comments that resulted in a significantly improved manuscript, as well as his numerous insights into applying operations research methodologies to improve aviation security. The research on aviation security conducted by Professor Jacobson and Professor Kobza has been supported in part by the National Science Foundation (DMI-0114499, DMI-0114046). Professor Jacobson's research has also been supported in part by the Air Force Office of Scientific Research (FA9550-04-10110).

Sheldon H. Jacobson (shj@uiuc.edu) is a professor in the Department of Mechanical and Industrial Engineering and director of the Simulation and Optimization Laboratory, University of Illinois at Urbana-Champaign. Laura A. McLay (lalbert@uiuc.edu) is a Ph.D. candidate in the same department. John E. Kobza (john.kobza@coe.ttu.edu) is a professor in the Department of Industrial Engineering, Texas Tech University.

References

1. A. Barnett, R. W. Shumsky, M. Hansen, A. Odoni, and G. Gosling, 2001, "Safe at Home? An Experiment in Domestic Airline Security," Operations Research, Vol. 49, pgs. 181-195.
2. A. Barnett, 2001, "The Worst Day Ever," OR/MS Today, Vol. 28, No. 6, pgs. 28-31.
3. A. Barnett, 2003, "Trust No One at the Airport," OR/MS Today, Vol. 30, No. 1, pg. 72.
4. A. Barnett, 2004, "CAPPS II: The Foundation of Aviation Security?" Risk Analysis, Vol. 24, pgs. 909-916.
5. V. Butler and R. W. Poole Jr., 2002, "Rethinking Checked-Baggage Screening," Reason Public Policy Institute, Policy Study No. 297, Los Angeles, Calif.
6. S. Chakrabarti and A. Strauss, 2002, "Carnival Booth: An Algorithm for Defeating the Computer-Aided Passenger Screening System," First Monday, Vol. 7, http://www.firstmonday.org/.
7. S. H. Jacobson, J. E. Virta, J. M. Bowman, J. E. Kobza, and J. J. Nestor, 2003, "Modeling Aviation Baggage Screening Security Systems: A Case Study," IIE Transactions, Vol. 35, pgs. 259-269.
8. S. H. Jacobson, T. Karnani, and J. E. Kobza, 2005, "Assessing the Impact of Deterrence on Aviation Checked Baggage Screening Strategies," International Journal of Risk Assessment & Management, Vol. 5, No. 1, pgs. 1-15.
9. J. E. Kobza and S. H. Jacobson, 1997, "Probability Models for Access Security System Architectures," Journal of the Operational Research Society, Vol. 48, pgs. 255-263.
10. L. A. McLay, S. H. Jacobson, and J. E. Kobza, 2005(a), "A Multilevel Passenger Prescreening Problem for Aviation Security," Technical Report, University of Illinois, Urbana, Ill.
11. L. A. McLay, S. H. Jacobson, and J. E. Kobza, 2005(b), "Integer Programming Models and Analysis for a Multilevel Passenger Screening Problem," Technical Report, University of Illinois, Urbana, Ill.
12. L. A. McLay, S. H. Jacobson, and J. E. Kobza, 2005(c), "When is Selective Screening Effective for Aviation Security?" Technical Report, University of Illinois, Urbana, Ill.
13. J. E. Virta, S. H. Jacobson, and J. E. Kobza, 2003, "Analyzing the Cost of Screening Selectee and Non-selectee Baggage," Risk Analysis, Vol. 23, No. 5, pgs. 897-908.


Flying Scared: How Much Spending on Safety Makes Sense (ORMS Today, October 1996, Volume 23, Number 5)

Published by Robert E. Machol on 30/01/2007

Prompted by the TWA 800 disaster, the public and politicians are clamoring for action. But some basic cost/benefit analysis reveals that installing billions of dollars worth of equipment could be an expensive mistake.

By Robert E. Machol
The July 28, 1996, issue of Time magazine shows heart-rending photographs of 38 of the 230 people who died when TWA 800 exploded. It is not inappropriate to mourn these people; but to put their deaths in perspective, note that on any given weekend we kill that many people on our highways and injure thousands more. Life is risky. Millions of Americans die every year. But America has a morbid fascination with any incident that involves a lot of people dying in one place at one time; and if the incident is an airplane crash, America as a whole becomes hysterical. It is not the purpose of this article to be cynical, but rather to take a rational, analytic view toward the questions that are facing us all: Should we continue to fly? Should we do more about preventing aircraft accidents? What should we do about terrorism? A long time ago I published in the OR/MS journal Interfaces an article entitled "How Much Safety?" [Machol 1986]. I strongly commend that article to readers of this one; I'm not much for higher mathematics, but there is a lot of interesting arithmetic in it. The Interfaces article began with the following quote from Airline Pilot magazine, a journal published by the Air Line Pilots Association: "The issue is not whether you have 10 B-747s operating in or out of an airport in one hour or one that comes in once a week. CFR (Crash/Fire Rescue equipment) requirements should be based on the need to protect passengers on the largest aircraft operating into that airport [Moorman 1986]." In my opinion, the assertion is wrong. As I wrote in the Interfaces article, "(The assertion) stems from the feeling that human life is priceless, and therefore no expense is too great if it has any possibility of saving lives. Alas, though many people feel that way, it is not a viable approach to system design in a world of finite resources."


$1 Billion for Every Life

Some of the arithmetic in the Interfaces article shows that, unbeknownst to himself, Moorman was recommending an expenditure in excess of $1 billion for every life saved. I feel that such an expenditure is not justified. It should be clear that we are talking about a statistical life, not yours, or mine or any one in particular. If little Suzie is stuck in the bottom of a well and in danger of her life, we will willingly spend millions of dollars and get three people killed getting her out. Three years ago, I sent a memo to the administrator of the FAA saying: "You have stated that safety is the most important consideration for you as administrator of the FAA, and that it cannot be compromised for other considerations. Every other administrator has said the same thing, as have DOT secretaries, airline presidents, etc. None of you has any choice about saying this. The danger is that you might believe it." It was made clear to me that he was not amused. So, for a start, what should we do about airport safety and the detection of explosives? Some years ago we had a lot of hijackings; we solved that problem, mostly with a simple technology that prevents people from getting guns onto airplanes. So why do we not have equipment that prevents people from getting bombs onto airplanes? Nearly 10 years ago, largely through FAA funding, a company developed an explosive-detection system called TNA, for Thermal Neutron Activation. Later it was realized that "Activation" in connection with nuclear energy might scare people, so the name was changed, but to save the acronym the name chosen was Thermal Neutron Analysis. (If the FAA would give as much attention to real safety as to this kind of PR, we might be better off.) It was a big device weighing many tons, and costing about a million dollars per device. It had high levels of nuclear energy contained by heavy shielding.
It could detect nitrogen, and there are essentially no useful explosives that do not have a great deal of nitrogen. One of the problems is that lots of other things (woolen sweaters, for example) also contain lots of nitrogen. Nonetheless, the FAA was on the verge of requiring every airport to install TNA equipment. I went to the California factory where the TNA was being developed and took along live explosives to perform an adversarial test. As every OR worker knows, the best way to prove that something works is to try as hard as you can to prove that it doesn't work, and hope that you fail. We put the explosives into several dozen of some hundreds of pieces of typical luggage (nobody but us knew which pieces) and put them through the TNA. It correctly detected about 85 percent of the explosives, and had some 15 percent false alarms (a "false alarm" means that you think you have detected explosives when no explosives are present). It might be assumed that further work would improve these numbers, but they are nowhere near good enough. I personally didn't worry too much about not detecting all the explosives. If you've got a system that detects explosives most of the time, the bad guys won't try to pack bombs in their luggage, assuming we continue to match baggage to passengers as we now do on international flights, and as we probably will on domestic flights by the time this article is published. (This is a case where we willingly accept extra expense and extra delay in exchange for extra safety; but as OR people know and many others do not know, the key questions are: How much expense? How much delay? How much safety?) I do worry about the false alarms. Suppose you get false alarms down to 5 percent. If you have a 747 going out with 600 bags, then you will "detect" explosives in 30 of them, at which point you must track down the owners and have them open their bags. And where will you do this opening?
Right there by the gate when you think there is a bomb in the bag? Or take all 30 bags out of the airport to open them while the plane waits? The manufacturer of TNA asserted, and had demonstrated, that it could deal with 600 bags an hour. In practice, we found that it dealt with considerably fewer because, for example, big bags are mixed with little bags, some bags don't have handles, and so on. (This kind of practical observation will not come as a surprise to anyone who has actually done OR.) How many TNAs do you need at an airport like JFK that can put out 20 or 30 747s an hour during the evening? Remember, each is enormous, weighs tons, has a lot of radioactivity, and costs a million dollars. And you probably can't examine Delta bags at an Air France counter, or vice versa. (Queueing theory is relevant here.) So if anybody asks you why we don't have explosive-detection equipment in our airports today, you can tell them that Machol is partly responsible.
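The false-alarm arithmetic above is worth writing out. The 85 percent detection and 5 percent false alarm figures are the ones quoted in the text; the bomb prior is an assumed illustrative number:

```python
# False-alarm arithmetic for a 600-bag 747. Detection and false alarm
# rates are the article's figures; the bomb prior is assumed.
bags = 600
p_false_alarm = 0.05      # hoped-for improved false alarm rate
p_detect = 0.85           # detection rate measured in the adversarial test

expected_false_alarms = bags * p_false_alarm
print(f"expected false alarms per {bags}-bag flight: {expected_false_alarms:.0f}")

# With independent bags, at least one alarm per flight is near-certain:
p_some_alarm = 1 - (1 - p_false_alarm) ** bags
print(f"P(at least one alarm) = {p_some_alarm:.12f}")

# Bayes' rule: if bombs are rare (assumed prior of 1 in a million bags),
# almost every alarm is false.
p_bomb = 1e-6
p_alarm = p_detect * p_bomb + p_false_alarm * (1 - p_bomb)
p_bomb_given_alarm = p_detect * p_bomb / p_alarm
print(f"P(bomb | alarm) = {p_bomb_given_alarm:.2e}")
```

The last line is the core of the resolution problem: under these assumptions, tens of thousands of alarms must be tracked down for every one that is real.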

New, improved techniques

Since that time the FAA and some foreign countries have done excellent work in developing new and improved techniques for detecting explosives. One of these techniques is now being tested in a couple of U.S. airports, and several are in use in other countries. This is not the place to describe these techniques or evaluate their efficacy; that efficacy will improve with time. X-rays may be useful, especially if combined with an efficient CAT scan system, but X-rays cannot distinguish explosives from other materials. Most explosives give off vapors that can be detected with appropriate sniffers if the explosives are not very tightly wrapped and if there is some suction or blower that brings the vapors to the sniffer. Some airports put checked luggage in a vacuum chamber to simulate going to altitude (since the easiest way to fuse a bomb for destruction of a jet is to have it go off when the jet reaches altitude), but it isn't very hard for a bad guy to set the fuse to go off the second or third time the plane gets to altitude. Just remember that there are difficulties with these techniques, some of which are indicated above (e.g., how do you deal with false alarms?); remember that the fact that a technique is in use in some other country doesn't prove that it is actually useful; and remember that it probably would be a mistake to rush into the installation of billions of dollars worth of doubtful equipment to help
some congresspersons get re-elected. The public is clamoring for action, and politicians tend to pander to this kind of thing. Note that Congress passed a law a few years ago mandating bomb-detection equipment by a certain date, but the FAA proved (predictably) unable to invent on demand. Apart from airport security, how should we assess the safety of the aviation system, and how should we assess the FAA's performance? In the "How Much Safety" article in Interfaces [Machol 1986], we boasted that we had fewer than one fatality per billion passenger miles; I now realize that this is not a good measure. Barnett [1990] shows that the probability of getting killed on a randomly selected flight is the best measure. It comes out to about one in five million for scheduled commercial jets of any of the developed countries. For the undeveloped countries it may be 10 times as great (personally, I would be willing to face a probability of death of one in 500,000 if I were in a hurry, but many people might not feel that way). Nonscheduled and propeller-driven aircraft may be more dangerous; certainly very small aircraft such as air taxis and private planes, especially private planes flown by bold pilots, are more dangerous. (There is an old saying in aviation: There are old pilots and there are bold pilots, but there are no old, bold pilots). Scheduled commercial domestic jets fly roughly 500 million people a year and kill about 100 of them, or one in five million (1996 has been an outlier, although TWA 800 was technically not a domestic flight). There are very few activities you can undertake that are as safe. More than 100 people a year drown in bathtubs. (I don't have an actual source for that number, but I've heard it for years, and it does give pause). Most importantly, flying is much safer than driving. And that's why I am strongly opposed to the proposed infant restraint-seat rule.
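Barnett's death-risk-per-flight measure is easy to check from the article's round numbers:

```python
# Death risk per flight, computed from the article's round figures for
# scheduled domestic commercial jets.
passengers_per_year = 500_000_000   # the article's figure
fatalities_per_year = 100           # the article's round number

death_risk_per_flight = fatalities_per_year / passengers_per_year
print(f"death risk per flight: 1 in {1 / death_risk_per_flight:,.0f}")

# Put another way: flying once every single day at this risk level, the
# expected wait for a fatal outcome is enormous.
flights_per_year = 365
expected_years = 1 / (death_risk_per_flight * flights_per_year)
print(f"flying daily, expected wait: about {expected_years:,.0f} years")
```

This is why the per-flight measure is more informative than fatalities per passenger mile: it matches the decision a traveler actually faces (take this flight or not), regardless of the flight's length.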

Infant Restraint-Seat Rule

The FAA now requires children over two years of age to have a separate seat, but those under two are permitted to be held on the lap of an adult, thereby saving the cost of an extra seat. The deceleration in a crash such as the DC-10 at Sioux City in 1989 causes so much force that the adult cannot hold the child. The child may be torn from the arms of the adult and slide down an aisle into a part of the plane which is afire. One child did, indeed, die needlessly in that crash, and now there is enormous political pressure to require every child to be strapped into an infant-restraint seat in his/her own private seat. Never mind that a tiny baby is going to be happier in a parent's lap. The best analysis I know showed that if this requirement were enacted, 80 percent of the two million under-two children who now ride free each year would fly in a purchased seat, but the other 20 percent would drive, along with their families, in order to save money. That's an extra million people a year driving. And because driving is so much more dangerous than flying, a dozen people would die and many more would be injured for every infant life saved on an airplane. Pro football TV analyst John Madden is well known for traveling back and forth across the country by bus because he is afraid of flying, although driving is more dangerous than flying. But comparing the safety of flying and driving is a very complicated business [Barnett, 1991], and all tied up with emotional considerations. I have always defined "risk" as a measure of the probability and severity of harm to human health, while "safety" is a subjective assessment of risk. If Madden is more comfortable driving, then maybe for him it is safer even if it is more risky. Furthermore, his large vehicle, driven by a professional chauffeur, is doubtless much safer than an ordinary auto.
Another point: danger in driving is roughly linear with distance traveled, while in aviation danger is virtually independent of length of flight. In driving, danger depends on day vs. night, on age of driver, on speed, on size of car, on alcohol, and on and on. One needs to look at the OR/MS literature (e.g., articles by Barnett) to better understand these issues. Of course the system should be as safe as possible, and I have had quarrels with the FAA about things I thought should be done to increase safety. For example, for years I urged the FAA to increase the distance of smaller planes behind B-757s because I predicted that otherwise accidents would occur when the trailing plane was caught in the wake vortex of the 757. Only after two accidents (killing 13 people) occurred exactly as predicted did they finally increase that separation. I still feel that the wake-vortex issue is not being handled optimally. But this is not the place to take up such questions. On the overall question of safety, people must realize that if you put enough weight and cost into making it safer (for example, requiring a defibrillator on every flight in case a passenger suffers a heart attack, as has been seriously suggested) eventually the aircraft becomes a "lead sled" which is perfectly safe but is too heavy to take off unless you remove all the passengers.
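The diversion argument in the infant-seat analysis reduces to comparing per-trip fatality risks across modes. In this sketch the flying figure is the article's one-in-five-million; the driving figure is an assumed illustrative rate chosen only to show the structure of the calculation, not a measured value:

```python
# Cost/benefit sketch of the infant restraint-seat rule. The diverted
# traveler count is the article's figure; the per-trip driving fatality
# risk is an assumed illustrative number.
extra_drivers = 1_000_000           # article: diverted infants plus families

p_die_driving = 1 / 100_000         # assumed per-person, per-trip risk
p_die_flying = 1 / 5_000_000        # the article's per-flight figure

extra_road_deaths = extra_drivers * p_die_driving
avoided_air_deaths = extra_drivers * p_die_flying

print(f"extra road deaths per year:  {extra_road_deaths:.1f}")
print(f"avoided flying deaths:       {avoided_air_deaths:.1f}")
print(f"net lives lost by diverting: {extra_road_deaths - avoided_air_deaths:.1f}")
```

Under these assumptions the diversion costs several lives for each one it could save, which is the shape of the "a dozen people would die for every infant life saved" conclusion: a rule that makes flying marginally safer can make travel as a whole more deadly.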

White-knuckle fliers

It is of interest to note that white-knuckle fliers tend to worry about the wrong things. In the past they have worried about mid-air collisions, and today they also worry about sabotage. Mid-air collisions are common enough between small aircraft at untowered airports [Machol 1979], but they just don't occur with domestic scheduled commercial jets; we haven't had such an accident in decades. (There was a midair collision over Cerritos, Calif., in 1986 involving a Mexican commercial jet.) The reason we don't have midair collisions is because our Air Traffic Control (ATC) system is extremely conservative and extremely effective. Sabotage is also very rare; we've had PanAm 103 and (possibly) TWA 800, but no accident that we know of to a scheduled domestic commercial jet due to sabotage. Mechanical failure, which people also worry about, is also rare. The most common cause of accidents to scheduled commercial jets is "controlled flight into terrain" in which the plane is flying along and, without mechanical failure or trauma to the pilot, or for any one of a dozen different reasons, slams into the ground [Machol, 1992]. Because the pilot doesn't think he is that close to the ground in such cases, the plane tends to be going at high
speed (much higher than for landing), and so there tend to be lots of fatalities. Transportation Secretary Federico Pena recently decreed for the first time that small planes (technically those with fewer than 30 passengers) must meet all the safety requirements of large planes. This may not make good sense. If you require, say, extra training for a pilot, this might make cost/benefit sense if the cost is split among 150 passengers, but might not if it is split among 15 passengers, each of whom must pay 10 times as much for this increment of safety. I'm not trying to analyze here how much training a pilot needs, but pointing out some elementary principles of cost/benefit analysis. And apart from that, it has been proposed that people should be given a choice between low-priced safe airlines and high-priced ultrasafe airlines. I think this is a reasonable topic for debate.

Grading the FAA

And what kind of grade should we give the FAA? The FAA does have a sizable OR group; unfortunately, under the present administration, it has been used for other purposes and has not done a lot of OR. It should be noted that the administrator and the deputy administrator, the two top officials of the FAA, are nominated by the president and confirmed by the Senate, so they are both political appointees. However, some administrators have politicized the agency to the extreme, while others have tried to keep their activities as nonpolitical as possible. Every administrator would prefer not having an accident to having an accident. The question is: Where on his priority list does safety rank compared to pleasing Congress and getting good treatment in the media? This has varied from one administrator to another. But regardless of the administrator, there are two fundamental questions that must be answered:

1. Which agency is ultimately responsible for aviation safety?
2. To what extent should the FAA be responsible for advancing and supporting the aviation industry (especially airlines and aircraft manufacturers)?

The latter question has been raised repeatedly lately because of the assertion that promoting the industry is incompatible with maximizing safety. In 1935, a TWA DC-2 crashed, killing, among others, a U.S. senator. The CAA (predecessor of the FAA) investigated the accident and concluded that it was TWA's fault. TWA investigated the accident and concluded that it was CAA's fault. The Senate, which had a special interest because of the loss of one of its members, investigated the accident and decided TWA was right. Conclusion: You cannot have the agency that operates the ATC system investigate accidents which might involve fault in the ATC system. Accident investigation was subsequently given to the Civil Aeronautics Board (CAB). Today we have the National Transportation Safety Board (NTSB) which investigates accidents. I have enormous respect for the NTSB. They make safety recommendations, but it is up to the FAA (which must worry about efficiency and cost, as well as safety) to choose whether or not to accept these recommendations. Usually it does, but if it fails to implement an NTSB recommendation it is often subject to severe criticism. The FAA should not be expected to implement every NTSB recommendation; ultimately the FAA, not the NTSB, is responsible for Air Traffic Control.

Dividing Up Responsibilities

There is no ideal solution to this question of optimal division of responsibilities, but I feel that the present set-up is about right. Similarly, there is talk of splitting the FAA into two organizations, one of which would be responsible for ATC while the other would be responsible for safety. Note that the FAA has some 20,000 controllers (a few in towers, but many more underground looking at radar scopes), but it also has thousands of employees who certificate aircraft, pilots, spare parts, maintenance and dozens of other things, operate and maintain weather equipment, do R&D, and perform other safety-related functions. Once again, there is no ideal way of dividing up these responsibilities. I think that under a competent, apolitical FAA administrator, the present set-up is as good as any. There are alternatives as to who does the research on such things as aviation safety. In the United States, a great deal of this is done by the National Aviation and Space Administration (NASA) rather than the FAA. Finally, there are different relationships of civil to military aviation. The FAA reports to a civilian, the secretary of Transportation, but in time of war, the FAA automatically comes under control of the Department of Defense. In other countries, the civil and military are much closer; in the United Kingdom, for example, much civil-aviation research is done in military facilities, and the controller of the CAA (the British equivalent of the FAA) was recently an air marshal in the Royal Air Force. In most of these cases there is no obvious optimal way to deal with such matters. In conclusion, I think that flying is remarkably safe. I warn against hysteria stimulated by TWA 800. I think the FAA is spending about the right amount on safety, although I don't always agree on exactly how it is being spent. And I caution against rushing into poorly thought-out solutions which may do more harm than good.


Robert E. Machol recently retired as chief scientist of the FAA. A frequent contributor to OR/MS Today, Machol is a past president of ORSA and a winner of its prestigious Kimball Medal for distinguished service to the society and to the profession of operations research.

Knowledge Base
SABRE Soars [Is Operations Research a dying profession?] (ORMS Today, June 1998, Volume 25, Number 3)

Published by Thomas M. Cook on 30/01/2007

The high-flying SABRE Group, boosted by a series of successful applications of OR-based decision support systems, boldly refutes the notion that operations research is a dying profession.


The high-flying SABRE Group, boosted by a series of successful applications of OR-based decision support systems, boldly refutes the notion that operations research is a dying profession. By Thomas M. Cook

Editor's note: The author, senior vice president of The SABRE Group, delivered the keynote address at the CORS/INFORMS meeting in Montreal. Following is the text of that presentation.

Seven years ago, when I was president of TIMS, I had the pleasure to speak at this conference. I'm surprised I have been invited back so soon. Because I expected to see a lot of old friends and colleagues in this audience who were also in Nashville in 1991, I got out my old notes to make sure I didn't say the same things. In that speech, I made several points that I still believe are true. First, there is huge latent demand for OR-based solutions. Second, the lack of communication and cooperation between practitioners and academicians seriously impairs our ability to discover and satisfy this latent demand. Third, academics need to spend more time in the real world getting dirty working on real problems with real data rather than taking the easy road to tenure and promotion by publishing papers that interest only fellow researchers and have little practical value.

Recently, I heard a story that is particularly relevant to the issue of OR in the business world. A president of an OR-based solutions company was having a staff meeting. Out of nowhere, an angel appeared in the middle of the conference table. The angel looked at the president and said, "I have some good news and bad news. The bad news is that you will die in the next week. The good news is that you will be going to heaven. More good news is that heaven has become a very customer-centric organization and you have your choice of three things for eternity. The first is that you can be incredibly handsome, the second is that you can be infinitely wise, and the third is that you can have all the money that you can imagine."
The president thought about the choices, looked around the table at his OR peers, and, perhaps under a little pressure from his group, said, "I choose infinite wisdom." The president's staff all nodded their approval. The angel said, "The last piece of good news is that you can have your wish effective immediately, before you die. Good luck." And then the angel disappeared. The president's staff all stared at the president, eager to see the expected serene, content smile of infinite wisdom. However, the president's face changed to a pained grimace. The staff member sitting closest to the president asked, "What is it, boss? What does it feel like to be infinitely wise?" The president answered, "I should have taken the money." That, my friends, is the problem: intellect and wisdom alone are not always enough!

Fourth, I indicated what I thought practitioners should do to be more successful, and finally I characterized AI and expert systems as a niche tool of limited value that was getting a disproportionate share of R&D dollars in the late 1980s and early '90s.

Today, I would like to explore the hypothesis that operations research is a dying discipline. Many have pointed to a reduced demand for OR talent coming out of graduate schools, the elimination of OR in the core MBA curriculum, and reduced attendance at national meetings as evidence that our profession is in the winter of its life cycle. I categorically reject that hypothesis based on many data points that refute the notion that ours is a dying discipline. I will present three data points that support this position: first, the existence and success of companies like I2 Technologies, Manugistics and The SABRE Group; second, the market demand for yield management as a way to increase revenue; and third, the use of OR to solve the very complex airline scheduling problem.

Company History

Let me first give you a chronology of the evolution of The SABRE Group, and explain the absolutely critical role our profession has played in the growth of a $2 billion annual revenue company with a market capitalization value of over $4 billion. When I joined American Airlines in 1982, there was no SABRE Group. There was an internal IT organization named DP&CS and an OR group called the OR department. I joined as the director of OR. During the next five years, the OR department evolved from a small, 12-person group working on relatively unimportant and esoteric problems to a group of 75 OR professionals building models for important decision-support systems, and performing strategic studies to support senior management decision-making at American Airlines.

In 1988, we formed AA Decision Technologies, or AADT, as a wholly owned subsidiary of AMR, the parent company of American Airlines, and began selling our services externally, in addition to our work for American Airlines. During the next five years AADT grew from 75 people to almost 600 people. By 1993, although our American Airlines business had grown, 80 percent of AADT's revenue was external to AMR. In 1993, The SABRE Group was formed within AMR. The original SABRE Group consisted of four primary organizations: SABRE Computer Services, SABRE Development Services, SABRE Travel Information Network and AMR Information Services. SABRE Travel Information Network was the $1 billion travel distribution business focused exclusively on travel agencies. SABRE Computer Services provided the data center, communications network, and desktop support for American Airlines and the travel distribution business. SABRE Development Services provided all applications development, and AMR Information Services marketed SABRE and American Airlines legacy systems externally.
In 1993 I proposed to Bob Crandall (recently retired chairman and CEO of AMR, see accompanying story) that AADT join The SABRE Group, and that we merge SABRE Development Services, AMR Information Services and AADT into one organization, which became known as SABRE Decision Technologies. He liked the idea, and we took on the arduous task of transforming an internal application development group into a market-focused group and merging the three different cultures into one culture that could win. In 1996, The SABRE Group was created as a legal entity. Eighteen percent of the stock was sold to the public, with 82 percent retained by AMR. In 1997 we decided to reorganize The SABRE Group and merge SABRE Computer Services, the operational arm of TSG, and SABRE Decision Technologies, creating a comprehensive information technology solutions division named SABRE Technology Solutions. Today, this new organization employs 8,000 professionals and will earn more than $1 billion in revenue this year.

As we plan for The SABRE Group's future, it is clear that the SABRE Technology Solutions division is the growth engine for the entire company. Our strategy is to become a major IT outsourcing company exploiting our knowledge of the travel and transportation industry and our portfolio of solutions. What differentiates us from our competition (IBM, EDS, CSC and others) is our ability to add significant value through the application of our OR-based decision support systems. Our competition attacks the IT spending of their clients; while savings can total 20 percent to 30 percent of the IT budget, the IT budget is, on average, only 3 percent of total spending. Therefore, our competitors' value proposition is an overall saving of less than 1 percent of operating expenses.

SABRE's proposition is much more compelling. Not only do we offer the traditional 20 to 30 percent savings in IT spending, we also attack the remaining 97 percent of spending by applying OR-based systems to their operational problems. Generating incremental revenue through implementation of model-based systems in areas of pricing, yield management and scheduling provides even more potential value to a prospective client. The point is that our discipline has been critical in creating a company worth over $4 billion, and is the key to our strategy of growing that company aggressively in the future. Earlier I said I was going to discuss a few specific data points that refute the notion that OR is a dying profession. Actually, there are many data points, not the least of which are the Edelman finalists every year.

Yield Management

The second data point that refutes the notion that OR is dying is yield management. In 1991, we won the Edelman Prize for yield management at American Airlines. For those of you who don't know what yield management is, let me briefly explain the problem. American Airlines and American Eagle operate over 4,000 flights per day and start taking reservations 330 days prior to departure. Each market has multiple fare classes with an inventory of seats the airline is willing to sell at a particular price, at any point in time. The job of yield management is to forecast demand by inventory bucket, and optimize the decision of whether or not to sell at a price the customer is willing to pay now or wait for higher-value, late-arriving demand. It's the classic decision of whether to take a bird in the hand or go for two in the bush. Yield management tries to optimize this decision process by using sophisticated forecasting and optimization techniques to determine how much to overbook each individual flight, and how to allocate the seats amongst the hierarchy of inventory buckets. We have estimated that the yield management system at American Airlines generates almost $1 billion in annual incremental revenue. To put this in perspective, 1997 was the only year in its history that American Airlines has had operating earnings approaching $1 billion. What is even more exciting is that virtually every airline of any consequence has realized that they cannot compete effectively without a yield management system. In other words, they can't live without OR! In addition to revolutionizing how airlines manage the revenue side of their business, OR-based yield management systems are becoming increasingly important to cruise lines, hotels, passenger rail companies, broadcasting networks and other organizations that need to book perishable inventory in advance. 
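The seat-allocation decision described here is often illustrated with Littlewood's two-class rule: keep protecting seats for the full fare as long as the probability of selling them exceeds the ratio of the discount fare to the full fare. The sketch below is a toy under that model, not American's actual system; the fares, capacity and normal demand forecast are invented numbers.

```python
from statistics import NormalDist

def littlewood_protection(fare_high, fare_low, mu_high, sigma_high):
    """Littlewood's rule: protect y seats for the high fare class, where
    P(high-fare demand > y) = fare_low / fare_high."""
    critical_ratio = 1 - fare_low / fare_high
    return NormalDist(mu_high, sigma_high).inv_cdf(critical_ratio)

capacity = 120  # hypothetical cabin size
# invented fares and a normal forecast of late-arriving full-fare demand
protect = littlewood_protection(fare_high=400, fare_low=150,
                                mu_high=30, sigma_high=10)
booking_limit_low = capacity - round(protect)
print(f"protect {round(protect)} seats; "
      f"sell at most {booking_limit_low} discount seats")
# → protect 33 seats; sell at most 87 discount seats
```

Real systems layer this idea across many fare buckets, add overbooking and re-optimize as bookings arrive, but the bird-in-hand trade-off the text describes is exactly this critical ratio.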
We are currently working with Club Med, Hyatt Hotels, the French National Railroad (SNCF), AMTRAK and Royal Caribbean Cruise Line; and even the U.S. Navy is applying yield management to its unique environment. The Navy engagement illustrates how robust the concepts of yield management are. Several years ago, people from the Navy met me at one of these meetings and informed me that they had a yield management problem and wanted SABRE to help solve it. Tom Blanco described the Navy's training problem to me: a perishable inventory of training slots or classroom seats that needed to be booked in advance, and the desire to overbook classes to accommodate late cancellations and to hold back some inventory for late-arriving, high-value demand. It was a classic yield management problem solved by OR.

Airline Scheduling

The third and final data point that leads me to reject the hypothesis that ours is a dying profession is how our discipline is revolutionizing airline scheduling practices. Last year we won the Edelman Prize with the French National Railroad (SNCF) for applying OR to scheduling the TGV (the French high-speed train network). I think we will have another strong Edelman Prize contender when we find time to enter our airline scheduling solution in the competition. Scheduling an airline, especially a large airline, is an extremely complex problem with thousands of constraints and millions of variables. Developing a high-quality schedule capable of attracting incremental revenue and market share is strategically critical to an airline's success. Although we have been working on the airline scheduling problem for well over 15 years, it has only been in the last six or eight years that OR has made a truly significant contribution to airline scheduling practices.


It started at American Airlines when I assigned a small group of OR people to the Capacity Planning department to determine the feasibility of designing and implementing the next generation scheduling system. Taking a chapter out of Gene Woolsey's book, I wanted these guys to learn how to schedule the airline before thinking about the next generation system. It took at least six months before they started thinking about what the system should do and could do. A few of the basic decision variables of the airline scheduling problem are:

Where to fly (markets)?

How often to serve a market (frequency)?

Over what hub to serve a particular market (for example, American Airlines has two east-west hubs, DFW & ORD)?

What time of day to fly (customer preference)?

What type of aircraft to assign to each route (for example, American Airlines and American Eagle have several dozen different aircraft types, all with different capacities, ranges and other operating characteristics)?

What flights to designate as through-flights, etc.?
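To give these decision variables a concrete, if drastically simplified, form: the fleet-type choice alone can be posed as "pick an aircraft type for each leg to maximize profit without exceeding the aircraft available of each type." Everything below (legs, profit figures, fleet counts) is invented, and the brute-force search stands in for the large-scale optimization models the article describes.

```python
from itertools import product

# invented per-leg profit (revenue net of operating cost) by fleet type
profit = {
    "DFW-ORD": {"MD80": 14000, "757": 18000},
    "DFW-MIA": {"MD80": 11000, "757": 12000},
    "ORD-LGA": {"MD80": 9000,  "757": 15000},
}
legs = list(profit)
available = {"MD80": 2, "757": 1}  # aircraft on hand of each type

best_value, best_plan = None, None
for plan in product(available, repeat=len(legs)):      # try every assignment
    used = {t: plan.count(t) for t in available}
    if any(used[t] > available[t] for t in available): # fleet-count constraint
        continue
    value = sum(profit[leg][t] for leg, t in zip(legs, plan))
    if best_value is None or value > best_value:
        best_value, best_plan = value, plan

print(dict(zip(legs, best_plan)), best_value)  # the single 757 goes where it earns most
```

At an American Airlines scale (thousands of legs, dozens of aircraft types, plus routing and through-flight decisions) the same model becomes a large integer program that must be decomposed, exactly as the text goes on to describe.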

It was clear that the problem would need to be decomposed if we were going to be successful. To make a long story short, the system that was designed and implemented was a model-based system that relies on sophisticated forecasting models and optimization models that optimize decisions like aircraft assignment, aircraft routing and through-flight assignment. The system has been benchmarked at American Airlines to contribute hundreds of millions of dollars to the bottom line each year. Because of the power of the forecasting and optimization embedded in the system, SABRE has been successful in selling the system to airlines like United, Delta, Northwest, US Air, Lufthansa, Swissair, Air France, Air New Zealand and many other airlines around the world.

OR Challenges

Because of the leverage created by model-based systems, we are continuing to invest millions of dollars in research annually. These examples are only a small subset of the success stories within our profession. No other discipline offers the potential we have to offer. Why, then, haven't we conquered the world? Let me first say what I don't believe the answer is:

There is no shortage of complex problems that are amenable to a modeling approach.


There is no shortage of OR talent.

There is no real competition from AI or expert systems.

There is no lack of computing power.

Lack of data is becoming less and less of a constraint.

There is no lack of good algorithms or modeling techniques.

If we are willing to earn it, there is no lack of senior management support.

So what do we as OR professionals have to do to be more successful and enhance our collective contribution to society? Let me suggest that the single most important thing an academic can do is to get out of academia. This departure from the university could be for a day, a week, a summer, a sabbatical, or a possible career change like the one I made 18 years ago. The key is that an OR professor has a lot to contribute to the solution of real problems in the real world, but unless he or she is willing to make a significant investment in time and energy, it won't happen. I get a lot of calls from academics looking for problems and data that they and their graduate students can take back to the lab and create a solution. These efforts almost always fail because the academics involved rarely spend the time necessary to really understand the problem, and/or work with end users to ensure the solution is implemented. If you look at the most successful academicians in our profession, almost every one has made the investment in the real world that I am advocating. Tom Magnanti, John Little, Ellis Johnson, Art Geoffrion, Dick Larson, Yossi Sheffi and Karla Hoffman are just a few of the people who have combined their academic careers with large contributions to the practice of OR. I would encourage all of the professors in the audience to follow their example.

OR Learnings

For the practitioners in the audience, or those professors who venture into the real world, let me offer some things I've learned over the past 20 years, oftentimes the hard way, that you might use to be more successful.

Learn to take the broad view of OR. Rather than only dealing with optimization problems, be an end-to-end problem solver. Your impact within the organization will increase, because it's not just the tools you have to offer, it's your approach to problem solving.


Learn to listen. Be sure to allocate enough time up front to understand what the problem really is before you try to solve it.

Be selective about the projects you pursue. Be sure it's a project that's important to the organization, and one that you can do.

Earn the respect of senior management. Since most key decision-makers had to work their way up the ladder, our degrees won't automatically bring us respect. We have to earn it the same way they did, through small successes.

Learn to communicate. It isn't a real solution unless we can communicate it to others, and motivate them to use it.

Learn to "package" or market your product, solution or deliverable. Make sure the recommendations are well thought out, well written, and backed up with the proper data. If you're delivering a model or system, be sure to include good user manuals, good help screens, good user interface and so on. Package your deliverables so others can easily understand how these solutions will impact the organization positively. A little "glitz" will go a long way.

At the same time, don't over-promise. Be able to deliver what you promise, when you promise it and on budget.

Be willing to accept a suboptimal solution. Since we are talking about real-world solutions, remember that getting part of the solution implemented is better than a brilliant theoretical solution with no execution.

Finally, retain control of your project and be willing to follow through on it. A lot of OR projects fail because the OR professional thinks the problem is solved by formulating the model on paper or building the prototype. But I've learned the hard way that the problem is not solved until the solution is being properly used by the end user. It is essential that you follow through.

In summary, I think our profession is not dying; it is alive and well. We are capable of making a much larger contribution and achieving greater personal success if we can learn from our most successful colleagues, make the investment of time and energy to sell our unique approach to solving problems, and deliver on our promises.

Knowledge Base
Corporate High Flyers (ORMS Today, December 1999, Volume 26, Number 6)



Published by Pinar Keskinocak on 30/01/2007

As popularity of elite aviation reaches new heights, time sharing of business jets presents a challenging problem for operations researchers.

In today's global economy, a growing number of executives or "elite" travelers believe that private aviation is the only smart way to travel. With the number of passengers increasing every year, travelers face several problems (see box) when using commercial flights, including delays, cancelled flights, being "bumped" from a flight or downgraded due to overbooking, no direct flights between certain cities (especially small cities), long connection times, long check-in times, misplaced baggage and a lack of enough first-class or business-class seats [Wade 1995, 1997]. Private planes can save huge amounts of time, as well as provide comfort, convenience and privacy. However, due to its high purchase cost (as much as $30 million for a Gulfstream jet) and operation and maintenance expenses, a private jet isn't for everyone. Luckily, elite travelers or companies seeking more efficient use of executives' travel time have an option that can be cost-effective: buying a share of a business jet.

Fractional jet ownership programs offer companies and individuals all the benefits of private flying, without the high cost of complete ownership and without the headache of maintaining a corporate flight department with its own maintenance staff and pilots. A fractional owner purchases a portion of a specific aircraft based on the number of actual flight hours needed annually. For example, one-eighth-share owners get 100 hours flying time per year, while one-quarter-share owners are entitled to 200 hours. Fractional owners have access to the aircraft any day of the year, 24 hours a day, with as little as four hours' notice.

Fractional jet ownership programs are relatively new, but are growing at a fast pace. NetJets (operated by Executive Jet Aviation, www.netjets.com), Flexjet (operated by Bombardier Business JetSolutions, www.aero.bombardier.com/htmen/A7.htm) and Raytheon Travel Air (a subsidiary of Raytheon Aircraft, www.raytheon.com/rac/travelair/) are among the leading fractional jet ownership programs. NetJets, which has the largest market share, offers up to 12 different aircraft types, including Cessna Citation, Raytheon, Gulfstream and Boeing jets [Zagorin 1999].
Executive Jet Aviation revenues were projected at $900 million for 1998 and climbing at an average rate of 35 percent annually. Introduced in May 1995, the Flexjet program offers Learjet 31A, Learjet 60 and Challenger aircraft. Flexjet has more than 350 clients, and its current growth rate is estimated at 100 new fractional owners per year. The Raytheon Travel Air program, which was started in 1997 and is currently serving more than 300 fractional owners, features three of the aircraft in the Raytheon Aircraft product line: Beech King Air B200, Beechjet 400A and Hawker 800XP.

To "book a flight," a fractional owner calls the company managing the jets and specifies the departure time, departure location and destination. In contrast to chartering, the owners are charged only for actual flight time, not for deadhead and repositioning (i.e., the time it takes a plane to reach the customer for pickup and to return to its base after drop-off) [Del Valle 1995]. A partial owner pays three separate fees for this program: a one-time purchase price for the fractional interest in the plane; a monthly management fee, which covers maintenance, insurance, administrative and pilot costs; and an hourly fee for the time the jet is used. For example, the purchase price of a one-eighth share of a Gulfstream IV-SP jet is $4.03 million, management fees are $20,500 a month and the hourly rate is $2,890 [Zagorin 1999]. Ownership rights usually expire after five years. Like full ownership, fractional ownership provides tax benefits to the buyer and can usually be sold back after a few years. The program has its widest appeal among small to midsize private companies that have business travel requirements, but cannot justify the cost of an entire business aircraft. Other owners include private individuals, celebrities, top executives or corporations looking to supplement their corporate flight departments' requirements.
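As a rough worked example from the figures just quoted, the annual cost of that one-eighth Gulfstream share can be tallied. Amortizing the purchase price evenly over the typical five-year term is my own simplifying assumption; it ignores resale value, taxes and financing.

```python
purchase_price = 4.03e6      # one-eighth share of a Gulfstream IV-SP
monthly_management = 20_500  # maintenance, insurance, admin and pilot costs
hourly_rate = 2_890          # charged only for occupied flight hours
share_hours = 100            # annual entitlement of a one-eighth share
term_years = 5               # ownership rights usually expire after five years

annual_cost = (purchase_price / term_years       # straight-line amortization
               + 12 * monthly_management
               + share_hours * hourly_rate)
print(f"${annual_cost:,.0f} per year")  # → $1,341,000 per year
```

That works out to roughly $13,400 per occupied flight hour, the kind of figure a buyer would weigh against chartering or full ownership.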
Fractional ownership best fits the needs of individuals and companies who fly between 50 and 400 hours a year [Levere 1996]. Full ownership is typically cost-justifiable when the annual flight hours exceed 400. Chartering tends to be cost-efficient for flying less than 50 hours a year on day trips. One has to keep in mind that the availability of a chartered aircraft whenever needed is not guaranteed, whereas most ownership programs guarantee the availability of the aircraft to the partial owners any time of the day.

Challenges in Managing a Fleet of Time-Shared Jets

To achieve their ambitious goals on growth, profitability and customer satisfaction, fractional ownership programs need to manage their operations in an efficient way, and this can be a challenging task. At the strategic level, capital investments for growing the fleet or for increasing the size of the crew have to be made carefully, since aircraft are very expensive to purchase and to maintain. At the operational level, aircraft and crew need to be scheduled and maintained to satisfy the customers' requests on time, at minimum possible cost to the company.

The aircraft-scheduling problem that has to be solved on a day-to-day basis can be described as follows: At any time, aircraft might be serving a customer or parked at one of many different locations. New customer requests arrive, each consisting of a departure location, departure time and destination. Usually it is necessary to relocate the aircraft to the departure locations. These flights are called "positioning legs" or "empty flights." Every customer request must be satisfied on time, possibly by subcontracting extra aircraft. Two major types of costs need to be considered in scheduling the aircraft: operating costs (fuel, maintenance, etc.) for flying the aircraft, and the penalty costs for not being able to meet some customer requests without subcontracting extra aircraft. One has to create a flight schedule for the fleet to satisfy the customer requests (by subcontracting extra aircraft if necessary) at minimum cost, under additional constraints of maintenance requirements and pre-scheduled trips.

Each aircraft has to go to maintenance periodically. When an aircraft comes out of maintenance, it can fly only a limited number of hours before its next maintenance, so the schedulers have to make sure that the available flight hours of the aircraft are not exceeded in any schedule. Similarly, an aircraft can do only a limited number of landings before its next maintenance. Some aircraft may have trips previously (or manually) assigned to them, and these assignments should not be changed while creating a new schedule. For example, a pre-scheduled trip may actually be a scheduled maintenance. Maintenance is done only at certain locations, and if an aircraft is scheduled for maintenance at a certain location at a certain time, the schedulers have to make sure that it will be there on time. Since customers only pay for actual hours flown, minimizing the cost of empty flight hours and subcontracted hours is the main objective.

Each client owns a share of a particular type of aircraft; however, in certain cases it is possible to substitute one type of aircraft for another. In general, bigger and faster aircraft (for example, a Learjet 60) can be substituted for smaller and slower aircraft, such as a Learjet 31. Companies managing these jets use this substitution flexibility, especially if there is excess demand for a particular type of fleet. However, allowing substitution between different types of aircraft also makes the scheduling problem larger and more complicated. Note that the time-shared jet aircraft scheduling problem is different from the commercial airline scheduling problems in several respects.
First, since many customers use these aircraft to travel between remote locations where not many commercial flights are available, it is very difficult to forecast the demand. Second, schedules have to be changed dynamically as new customer requests arrive. Since partial owners can request to use an aircraft any time of the day, the schedules have to be flexible to accommodate new trip requests. Third, a tradeoff has to be made between minimizing the costs due to positioning legs and the costs due to subcontracting. With these in mind, the time-shared jet aircraft-scheduling problem, in isolation from the crew issues, resembles the multiple depot vehicle-scheduling problem with additional constraints [Bodin et al. 1983].

Aircraft schedules also have to be coordinated with crew schedules. For example, in the NetJets program, workday schedules (also called duty rosters) are published 60 days in advance so that pilots can easily plan for birthdays, anniversaries, etc., and maintain a home life outside of their regular work schedule. Different work rules, such as minimum off-day requirements (e.g., at least two out of every seven days should be assigned off-duty for each pilot) and maximum work stretches (i.e., the maximum number of consecutive days a pilot is on-duty), need to be considered while creating duty rosters, long before the actual trip requests are known. In addition to the work rules, which are based on contracts or FAA regulations, one also has to consider pilot qualifications to make sure that enough pilots with the desired qualifications are available to fly the aircraft on any given day. To fly most jet aircraft, at least two crew members are needed, where one is qualified as a "pilot" and the other as a "co-pilot." For example, a crew member may qualify as a "pilot" for a Lear 31 jet and as a "co-pilot" for a Lear 60 jet, yet not be qualified as a "pilot" for a Lear 60 jet.
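The two work rules mentioned here (a minimum number of off-days in every seven-day window, and a cap on consecutive on-duty days) translate directly into a feasibility check on a proposed duty roster. The parameter values below are illustrative, not actual NetJets or FAA figures.

```python
def roster_ok(on_duty, min_off_per_7=2, max_stretch=6):
    """Check a pilot's workday schedule (True = on duty) against two
    common duty-roster rules; returns False on the first violation."""
    # rule 1: at least min_off_per_7 off-days in every 7-day window
    for i in range(len(on_duty) - 6):
        if on_duty[i:i + 7].count(False) < min_off_per_7:
            return False
    # rule 2: no on-duty stretch longer than max_stretch days
    stretch = 0
    for day in on_duty:
        stretch = stretch + 1 if day else 0
        if stretch > max_stretch:
            return False
    return True

# a 14-day "work 5, off 2" pattern satisfies both rules
print(roster_ok(([True] * 5 + [False] * 2) * 2))  # → True
# seven straight duty days violate the off-day rule
print(roster_ok([True] * 7 + [False] * 7))        # → False
```

A rostering optimizer would search over such schedules for the whole pilot group, adding the qualification-coverage constraints per aircraft type that the text describes next.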
Note that having the same number of crew members (with arbitrary qualifications) on-duty every day does not guarantee enough crew coverage for each aircraft type. Related to the problem of creating flexible crew workday schedules is the strategic question of how many crew members with each possible set of qualifications are needed, and how much the company has to invest in training the existing crew members to attain additional qualifications. In terms of operational flexibility, it is advantageous to have crew members who are qualified to perform more than one task. However, training is costly, and with each additional qualification, the salary of a crew member, as well as the likelihood of his leaving the company for a job elsewhere, also increases. The strategic and operational problems faced by the time-shared jet ownership business are challenging problems for OR/MS researchers as well as practitioners. An initial attempt at studying some of the problems faced by this growing industry has been made by Keskinocak and Tayur [1997, 1998], but the area is still wide open for innovative research.

Commercial Air's Growing Pains

The first commercial flight in the United States occurred on Jan 1, 1914 in Florida when Tony Jannus flew A. C. Pheil the 21 miles across the bay from St. Petersburg to Tampa in a two-seat Benoist at an altitude of 15 feet. Eighty-four years later, in 1998, U.S. carriers flew more than 615 million passengers.


Unfortunately, as the number of commercial passengers soared, so did the number of complaints. In an effort to maximize revenue, airlines routinely "overbook" seats, especially during business hours. According to figures reported to the Department of Transportation, the nine major domestic airlines "bought out" 757,000 volunteers in 1994; another 52,000 passengers on these airlines were listed as "bumped," or involuntarily denied boarding. In January 1996, only 62.7 percent of all domestic flights by 10 major airlines were reported on time, that is, on the ground within 15 minutes of their schedule. Late arrivals meant late departures and delays for later flights at other airports.

For travel between relatively small or remote cities, it may be difficult to find a flight, let alone a direct flight. Flying from Rochester, N.Y., to Wilmington, Del., on a commercial airline takes 10 hours with connections, but less than two hours on a private jet. In some cases, the trip between small cities may even be "intermodal," which means that although the airline ticket covers the whole distance, part of the trip is traveled by bus.

Among other problems commercial air travelers face are long check-in times and misplaced baggage (luggage that has been lost, stolen, delayed or damaged). In 1996, the 10 biggest U.S. airlines reported almost 2.4 million complaints over misplaced baggage, an average of 5.3 for every 1,000 passengers.

Sources: Department of Transportation, New York Times.

References

1. L. Bodin, B. Golden, A. Assad and M. Ball, 1983, "Routing and Scheduling of Vehicles and Crews: The State of the Art," Computers and Operations Research, Vol. 10, pgs. 63-211.
2. C. Del Valle, 1995, "Can't Afford a Lear? Buy a Piece of One," Business Week, Sept. 11.
3. P. Keskinocak and S. Tayur, 1997, "Assigning off-days to multiple types of workers in seven-days-a-week operations," GSIA working paper, Carnegie Mellon University, Pittsburgh, Pa.
4. P. Keskinocak and S. Tayur, 1998, "Scheduling of Time-shared Jet Aircraft," Transportation Science, Vol. 32, pgs. 277-294.
5. J.L. Levere, 1996, "Buying a Share of a Private Jet," The New York Times, July 21.
6. B. Wade, 1995, "Passengers Play the Airlines' Bumping Game," The New York Times, May 21.
7. B. Wade, 1995, "When the Plane is Really a Bus," The New York Times, Dec. 14.
8. A. Zagorin, 1999, "Rent-a-Jet Cachet," Time Magazine, Sept. 13.

Pinar Keskinocak is an assistant professor in the School of Industrial and Systems Engineering at Georgia Institute of Technology. She has worked on the development of a decision-support system for aircraft scheduling and crew off-day assignment for one of the leading fractional ownership programs.


Cashing in on E-commerce (ORMS Today, Volume 27, Number 3)

Published by Arthur Geoffrion on 30/01/2007

The value of operations research soars as the digital economy takes wing

By Arthur Geoffrion

If operations research has been important to the offline economy for the last half century - as amply demonstrated by other articles in this issue - it is destined to be still more valuable to the digital economy as businesses rethink their basic value propositions and in some cases seek to reinvent themselves. This is especially true in such areas as supply chain management, dynamic pricing/yield management and Internet marketing, which are the focus of this article. Supply Chain Management. The distinction between tangible goods like cars and computers, and (digitizable) information goods like magazines and music, comes up constantly in the e-business context. When completely digitized, information goods incur essentially no production cost after the first unit, no transportation cost and no inventory cost. Tangible goods do not enjoy these magical qualities, and require supply chains whose combined annual costs are approaching $1 trillion in the U.S. Based on forecasts by Forrester Research, Inc. for North America in 2004 [1], about two-thirds of projected business-to-consumer (B2C) e-commerce revenues ($204 billion) and four-fifths of projected business-to-business (B2B) revenues ($3.25 trillion) will be in tangible goods rather than in information goods. Hence, supply chains are destined to play a monumentally important role in the digital economy. Their great importance is already plain, as the business press has been proclaiming emphatically since about mid-1999. This guarantees a major role for operations research, since a half-century of experience teaches that supply chain excellence is very difficult to achieve without quantitative modeling. Planning and scheduling are the most fruitful areas for OR to improve supply chains. Planning includes deciding which plants to
have and what major equipment they should contain, which products to produce where, how many warehouses to have of which type, and what size fleet to have. Scheduling includes deciding when and what to produce, store and move. Success stories in both planning and scheduling - at Hewlett-Packard [2], IBM [3], Sears [4] and hundreds of other companies big and small - are plentiful, and have been well documented in this magazine as well as Interfaces, the practice-oriented journal of the Institute for Operations Research and the Management Sciences (INFORMS). Without sound planning and scheduling, even excellent execution will cost more than necessary and will not provide desired service levels. This was the sad experience of many firms that implemented ERP (Enterprise Resource Planning), for, in reality, there was essentially no P in ERP. Now there is a brisk demand for supplemental APS (Advanced Planning and Scheduling) software, and all the ERP vendors are scrambling to incorporate better planning and scheduling into their products, along with Internet-enabled connectivity. The universal objective is enterprise-wide "optimization," a term that to many is all but synonymous with operations research. Although everyone wants "optimal" plans and schedules, most planning and scheduling software currently available from APS and ERP vendors produces solutions that may be workable but only faintly approximate being truly optimal. Over time, supply chain management will migrate to software solutions that incorporate better operations research technology. Competition among e-businesses, exacerbated by the rapid emergence of e-markets and associated opportunities to reconfigure supply chains dynamically, will hasten this migration.

Dynamic Pricing and Yield Management. You have limited and perishable capacity - seats on an airplane, time on an expensive manufacturing line, hotel rooms, communication bandwidth, etc. - and you want to gain the most revenue possible from this capacity. An empty stateroom on a ship that has sailed, idle manufacturing capacity or an empty theater seat is a wasted revenue opportunity. The closer to the time when capacity will expire (e.g., a sailing date), the more inclined you are to discount from the usual price structure, or perhaps, on the contrary, to raise prices for desperate last-minute buyers. But by how much and when? This kind of question, which requires highly technical analysis, gives rise to the closely related OR subfields of yield management, revenue management and dynamic pricing. The results have been spectacular. For example, yield management has generated "almost $1 billion of incremental annual revenue" at American Airlines according to Tom Cook, former president of The Sabre Group, Inc. [5] (See related article The Sabre Story [page 46].) This figure exceeds the airline's total operating earnings most years. In another dramatic example, National Car Rental executives credited revenue management with saving the car rental company from liquidation and returning it to profitability [6]. The sophisticated analytical methods responsible for such successes are propagating rapidly beyond the airline industry that gave birth to them. The advent of the digital economy is making dynamic pricing-type questions ever more important. One reason is that communication with customers is so much better thanks to e-mail and the Web. These technologies create the opportunity to change prices globally and instantly at little or no cost as drop-dead dates for capacity commitment approach. At the same time, they facilitate rapid customer responses to price changes and quick reaction to that feedback. This is true not only for B2C e-commerce, but also for B2B. For example, yield management systems are now being integrated with core business processes by online leaders in last-minute travel deals, natural gas trading, tickets and other industries [7].
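The "by how much and when" question has a classical starting point in Littlewood's two-fare rule: keep protecting seats for late-booking full-fare passengers as long as the expected marginal full-fare revenue exceeds the discount fare. The sketch below uses a normal demand model with invented fares and demand figures; it illustrates the textbook rule, not the proprietary systems mentioned above.

```python
# Littlewood's rule for two fare classes: protect y* seats for full-fare
# demand, where full_fare * P(demand > y*) = discount_fare, i.e.
# y* = F^{-1}(1 - discount_fare / full_fare). Numbers are illustrative.
from statistics import NormalDist

def protection_level(full_fare, discount_fare, mean_demand, sd_demand):
    critical_ratio = 1 - discount_fare / full_fare
    return NormalDist(mu=mean_demand, sigma=sd_demand).inv_cdf(critical_ratio)

# Full fare $400, discount fare $150, full-fare demand ~ Normal(60, 15^2):
y = protection_level(400, 150, 60, 15)
print(round(y))  # seats held back from discount sale
```

Note how the rule responds to price: as the discount fare rises toward the full fare, the critical ratio falls and fewer seats are protected, which is exactly the intuition of discounting more aggressively when the revenue gap is small.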
Dynamic pricing and yield management are destined to play a crucial role in online auctions and e-markets, presently the most explosively expanding part of e-business. Business processes incorporating these ideas can be designed properly only with the help of analytical methods.

Internet Marketing. Marketing science is one of the most important OR subfields for success in B2C, and to some extent in B2B as well. Many familiar topics take on a new dimension with the advent of the Web. For example, understanding online consumer behavior and price sensitivity becomes one of the keys to successful e-tailing. Channel conflict is now of great concern to manufacturers adding a direct Internet channel. The Web is a new medium for new product diffusion and brand building. And, at last, the technology is at hand to enable one-to-one marketing for many kinds of customers. Other marketing topics arise for the first time, such as how to mine clickstream data and Web logs for information that can help refine Website designs and online marketing tactics, how to dynamically optimize the choice of ads to be served from ad inventory each time a Web page is requested, and how to design a "clicks and mortar" strategy. The business media often treat all Web marketing topics indiscriminately as novel phenomena without historical precedent. But in fact, many well-known theories and techniques of marketing science, sometimes combined with other OR techniques, can be adapted to good effect [8]. Quantitative analysts who study such issues, often called Internet Marketing specialists, are becoming increasingly influential in e-business [9,10].

Why OR Is So Valuable


We have looked briefly at supply chain management, dynamic pricing/yield management and Internet marketing. Space permitting, it would be easy to establish the key role of operations research in such other areas as the telecommunications infrastructure for e-business (especially hardware manufacturing and network planning, operations and management) and online financial services (such as prepayment modeling for equity-backed loans and services pricing) [11]. (See related stories on Level 3 [page 38] and Merrill Lynch [page 48].) Why is OR playing a key role in bringing the emerging digital economy to life, and in helping e-businesses compete effectively in this economy? There are at least six answers to this question, and they are not the ones most commonly given in the context of the offline economy. Each springs from an attribute of the digital economy:

1. OR excels at squeezing value out of data, which is being created in ever-increasing abundance by e-businesses. Think of Web log data, clickstream data, mobile scanner data, ERP data, process data stashed in data warehouses, customer data collected for personalized marketing programs, etc. Statistics and data analysis, including data mining, are core competencies of operations research. Coupling these with decision technology enables organizations to make sense of the data flood and turn it into actionable knowledge.

2. OR copes well with complexity because it is congenitally analytical in approach. Analysis literally means breaking up a complex whole into its component parts - just what is needed to cope with an e-business revolution that is exacerbating organizational complexity by demanding more alliances, more inter-company focus, and better integration of business processes of all types.

3. OR has good tools for managing risk and coping with uncertainty. That's what statistics, decision analysis and probabilistic modeling are all about. The pace of business and technological change is now so rapid that the past is a poor guide to the present, let alone the future. The successful business practices of the past are often rendered obsolete, along with experience-based business intuition. In this situation, it is prudent to invoke model-based approaches to decision-making that quantify risk explicitly.

4. OR is perhaps the most reliable way known to achieve a truly deep understanding of many kinds of business issues. One of the characteristics of the digital economy is a greater management premium put on deep understanding. Knowledge is increasingly recognized as a key competitive asset, and there is widespread interest in methods for knowledge discovery, knowledge management and protecting intellectual capital. It can be argued that no phenomenon, even man-made ones like business processes, is really understood until it can be measured and modeled. OR is the branch of knowledge that specializes in the scientific analysis of business issues and processes, with the eventual goal of understanding them deeply. To illustrate, IBM recently won a major INFORMS-sponsored award for achieving a model-based understanding of their inventories sufficiently deep to enable them to capture more than $750 million in savings during a single year [3].

5. OR has plenty of experience performing virtual "experiments" that study business issues without risking damage to a company's assets or financial performance. This trick is performed with the help of models, mathematics and computer simulation, and is increasingly valuable in a business era where intuition-busting novelty is the order of the day and the consequences of failures can be multiplied many times by the global reach of the Web.

6. OR thrives in situations where the same kind of decision must be made very quickly (perhaps in real time, as in many Web applications) and repeatedly, for it can furnish decision technology that can be embedded in the information systems and e-business software that implement business processes. E-business is fundamentally about automating business processes and coupling these to Internet (often Web) interfaces with other companies or customers. Many decisions that used to be made manually are being converted to fully automatic - even home equity loans. Doing this to good economic effect, and in a way that satisfies other companies and customers, usually requires decision technology as well as information technology. Rules of thumb that work well even 99 percent of the time can generate huge economic penalties and dissatisfaction when used on a high-traffic Website. The solution is to install solid decision technology wherever possible.

As a professional society, INFORMS recognizes and embraces the growing importance of the digital economy. For example, Interfaces will publish a special issue on e-business applications of operations research later this year, and follow it with a regular Forum on the same theme. Other INFORMS journals have published or will publish special issues on applicable research relating to e-business, and a new special interest section of INFORMS on e-commerce has just been born. INFORMS will hold a conference for OR practitioners in May 2001. The meeting theme: "Optimizing the Extended Enterprise in the New Economy." (See related article on page 72.) All this adds up to INFORMS as a fertile source of information and expertise increasingly aligned with today's e-business challenges.


Getting OR Help for E-Business

If operations research is so crucially important to e-business, the logical next step would seem to be for e-businesses to add people with these skills to their staffs if they don't have them already in sufficient numbers. Unfortunately, this is easier said than done; the competition for OR talent is unusually fierce these days [12]. Fortunately, this cloud may have a silver lining. It turns out that many OR tools can be accessed or delivered via the Web. For example, a great deal of optimization software is available at such sites as NEOS (www-neos.mcs.anl.gov/), and statistical resources are now online through such sources as the Interactive Statistical Pages project (http://www.statpages.net/) and the Department of Statistics at Carnegie Mellon University (lib.stat.cmu.edu). Other Web sites offer tools and techniques for decision analysis, discrete event simulation, forecasting, Web data mining and more. This means that OR experts can be more productive than was possible prior to the Web. It also means that professionals in fields adjacent to OR (like mathematics and engineering), and in fields that use OR technology extensively (like financial engineering and industrial engineering), now have convenient online access to many OR tools. Moreover, thanks to the Internet, OR consultants can work more easily than ever with distant clients. Bottom line: if you can't lure OR experts onto your staff, seek help from related fields and the OR consultancies. - Arthur Geoffrion

References

1. M.R. Sanders, 4/18/00, "Global eCommerce Approaches Hypergrowth," Brief, Forrester Research, Inc.; S. Williams, September 1999, "Post-Web Retail," Report, Forrester Research, Inc.; S.J. Kafka, February 2000, "eMarketplaces Boost B2B Trade," Report, Forrester Research, Inc.; M. Putnam, January 1999, "Business Services On The Net," Report, Forrester Research, Inc.
2. E. Feitzinger and H.L. Lee, 1997, "Mass Customization at Hewlett-Packard: The Power of Postponement," Harvard Business Review, January-February, pp. 116-121.
3. G. Lin, M. Ettl, S. Buckley, S. Bagchi, D.D. Yao, B.L. Naccarato, R. Allan, K. Kim and L. Koenig, 2000, "Extended-Enterprise Supply-Chain Management at IBM Personal Systems Group and Other Divisions," Interfaces, Vol. 30, No. 1 (Jan-Feb). See also the sketch of this and another supply chain project at IBM in B. Dietrich, N. Donofrio, G. Lin and J. Snowdon, "Big Benefits for Big Blue," in this issue.
4. D. Weigel and B. Cao, 1999, "Applying GIS and OR Techniques to Solve Sears Technician-Dispatching and Home-Delivery Problem," Interfaces, Vol. 29, No. 1 (Jan-Feb).
5. T. Cook, 1998, "Sabre Soars," OR/MS Today, June 1998 (http://www.lionhrtpub.com/orms/orms-6-98/sabre.html). See also the update on the Sabre story in this issue.
6. M.K. Geraghty and E. Johnson, 1997, "Revenue Management Saves National Car Rental," Interfaces, Vol. 27, No. 1 (January).
7. See, for example, the press releases of PROS Revenue Management, Inc. (http://www.prosrm.com/), Sabre Inc. (http://www.sabre.com/), and Talus Solutions (http://www.talussolutions.com/).
8. A. Montgomery, 2000, "Applying Quantitative Marketing Techniques to the Internet," forthcoming, Interfaces, Vol. 30, No. 6 (November-December).
9. W. Hanson, 1999, "Internet Marketing," South-Western College Publishing.
10. Marketing Science and the Internet, 2000, special issue of Marketing Science, Vol. 19, No. 1 (Winter).
11. A. Geoffrion and R. Krishnan, 2000, "OR in the E-Business Era," forthcoming, Interfaces, Vol. 30, No. 6 (November-December).
12. P. Horner, 2000, "The Best of Times," OR/MS Today, February (http://www.lionhrtpub.com/orms/orms-200/horner.html).


Arthur Geoffrion is the James A. Collins Professor of Management at UCLA's Anderson School. After working for many years on optimization and its applications to supply chain management and other areas, and then on the foundations of computer-based modeling environments, his current interests focus on e-business. He is a member of the National Academy of Engineering and a recent president of INFORMS.

The Science of Smart Decisions (ORMS Today, June 2000, Volume 27, Number 3)

Published by Harlan P. Crowder on 30/01/2007

How to make operations research a core business competency at your company

By Harlan P. Crowder

Operations research and management science involve using mathematical modeling and statistical methods to help people analyze complicated business processes and make good decisions. These techniques are especially useful for helping understand and deal with business complexity and uncertainty. In the 21st century business climate, complexity and uncertainty are at an all-time high: the electronic economy requires managers and executives to make better and faster operational and strategic decisions; globalization and the Internet are shifting and redefining relationships with customers, suppliers, partners and competitors; and the lowest unemployment rate in two generations is taxing the ability and imagination of companies to find, train and retain workers and managers. In this fast-paced and hectic environment, the role of operations research (OR) is increasingly viewed as a vital and necessary
function for helping analyze and run a business. Few CEOs today would try running a Fortune 500 company without a product design or strategic planning organization. In the future, OR expertise and technology will be standard components of successful enterprises for helping managers and executives understand complicated business issues and make smart decisions. The question is, what should be the relationship of OR capability and expertise to the enterprise? Should the company outsource OR requirements to consultants or business partners? Or should this capability be a core competency, with OR expertise residing within relevant company organizations? The objective of this article is to explore the various roles of OR within a company, outline conditions that indicate when OR expertise should be integrated into the enterprise, and give some guidelines about how to start building OR capability in your company.

The Role of OR in 21st Century Enterprises

Operations research methods and technologies have traditionally played an important role in business areas such as supply chain planning and logistics network design and operation. In the future, OR will help companies deal with a broad range of new business challenges. The increased complexity of running a successful business. Many large companies with complex business processes have used OR for years to help executives and managers make good strategic and operational decisions. American Airlines and IBM have incredibly complex operations in logistics, customer service and resource allocation that are built on OR technologies. As the trend of increased business complexity moves to smaller enterprises, OR will play vital operational and strategic roles. New opportunities in the electronic economy. The Internet and telecommunications revolutions will increase the requirement for OR analytic decision tools. The idea of a static supply chain - materials supply, product manufacturing, multichannel distribution and product sales - will give way to customized, one-off, built-on-the-fly virtual supply chains. The necessary components will be brokered and assembled as needed, the configuration, functionality and price dictated by application requirements and the market. Lots of information, but no decisions. Enterprise resource planning systems and the Web have contributed to a pervasive information environment; decision-makers have total access to every piece of data in the organization. The problem is that most people need a way to transform this wealth of data into actionable information that helps them make good tactical and strategic decisions. The role of OR decision methods is to help leverage a company's investment in information technology infrastructure by providing a way to convert data into actions. The role of OR is continually expanding with new and innovative applications. For example:

A leading provider of broadband telecommunications services is designing and installing a nationwide fiber optic network for carrying digital information. They foresee the day when digital communication will be too cheap to meter; the company's business will involve selling services around the communications capacity. This company is using OR methods for planning how to build and allocate resources for serving new communications-based businesses and communities that will not even exist for another five or 10 years.

A large nationwide bank is using OR techniques to configure complicated financial instruments for their customers. A process that previously required a human agent and took minutes or hours to perform is now executed automatically in seconds on the bank's intranet. And the resulting financial products are far superior to those produced by the manual process.

A major retail enterprise is using OR methodology for making decisions about customer relationship management. They are using mathematical optimization to achieve the most profitable match between a large number of customer segments, a huge variety of products and services, and an expanding number of marketing and sales channels such as traditional direct mail, call centers, narrowcast TV, Internet and e-mail.
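The retail matching problem can be sketched in miniature as an assignment problem. The profit figures below are invented for illustration; a production system would estimate them from response models and solve at the scale of thousands of segments with integer programming rather than brute-force enumeration.

```python
# Toy segment-to-channel matching: pick the one-to-one assignment of
# customer segments to marketing channels that maximizes total expected
# profit. The profit matrix is fabricated for illustration only.
from itertools import permutations

# profit[i][j]: expected profit of reaching segment i via channel j
# (channels: 0 = direct mail, 1 = call center, 2 = e-mail)
profit = [
    [12,  7,  3],   # segment 0
    [ 5, 11,  9],   # segment 1
    [ 8,  6, 10],   # segment 2
]

best_total, best_plan = max(
    (sum(profit[seg][ch] for seg, ch in enumerate(plan)), plan)
    for plan in permutations(range(3))
)
print(best_plan, best_total)  # (0, 1, 2) 33
```

Enumeration works only for toy sizes; the point is that "most profitable match" is a well-defined optimization, not a judgment call, once the profit estimates are in hand.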

OR expertise: Outsource or Build into the Business?

The standard management mantra is, "If it's not a core competency, outsource it." So the question is, should OR be considered a core competency in your business, one whose technology and expertise you therefore need to build up? Or can it be outsourced to a consultant or business partner? This complicated issue can have strategic implications. In order to gain some perspective, here are several questions that will help determine the level of OR requirements for an organization or company.


What will your company look like in three years? Pretty much like today? Or will it be unrecognizable to today's employees, customers and shareholders?

Several years ago, the red-hot management and consulting buzzword was "Business Process Reengineering." We don't hear much about BPR anymore because, for many companies, it has become a way of life. Many organizations and enterprises are continually reinventing and reengineering themselves in response to new technologies, customers and markets. A company undergoing continual organizational and business flux should have OR as a core competency. Think of OR as a wind tunnel for business: it lets you experiment with business ideas and designs, using computer models to inexpensively "test fly" new processes in the laboratory before they are committed to the expensive and unforgiving real world. A company today that anticipates having the same organization, market strategy and business model for the foreseeable future probably doesn't need OR as a core competency. It needs help in planning its bankruptcy strategy.

Is your company rapidly expanding its operations, marketing and sales channels, adding new touch points with customers, suppliers and partners? Or are the telephone and fax machine going to be fine for now?

The Internet and other new technologies have significantly expanded our ability to conduct business in new and exciting ways. New partners - and competitors - are popping up in places you didn't know existed, let alone how to pronounce. Electronic disintermediation is reducing the friction of commerce and squeezing out the middleman; great news, unless, of course, you happen to be the middleman. But this shift to wider business bandwidth poses a problem: which combinations of new communications channels are best for customers, suppliers and partners? Especially for marketing and sales, the promise of low-cost touch points is compelling, but how should these new tools be combined and used over time in order to maximize the profit potential? We are seeing a wide array of new OR applications that focus on how to use new business methods and technologies. Applications such as brokering, auctions, dynamic pricing, horizontal marketing and the best use of electronic commerce portals all have underlying mechanisms that can be better understood and made more effective and efficient using OR analysis and techniques. If your company is expanding its reliance on new mediums for connecting with your customers, suppliers and partners, then you will increasingly benefit from the analysis and insight offered by OR technology and expertise.

Are you an "Old Economy" 20th century company struggling to adapt to the 21st century "New Economy"?

There is lots of attention these days on the bright stars of the New Economy, the purveyors of information that deliver precisely ordered streams of electrons to their customers. They don't need heavy machinery, trucks and fuel, and big warehouses for their supply chains. Inexpensive plastic disks or, increasingly, a direct link to the Internet, is the only element of their distribution channel between enterprise and customer. But most companies are Old Economy; they still design, build and deliver physical, molecular-based products to customers. At the end of the day, they still must build volume-filling commodities at a geographic location, and move products from point A to point B with trucks, trains, planes and ships. These enterprises still have planning, scheduling and logistics problems, and no technological breakthrough will change that fact. The good news is that Old Economy companies are rapidly adopting the electronic and information technologies of the New Economy to make operational and strategic decision-making more efficient and effective. Combining OR technologies with electronic trading communities and Internet-based enterprise planning systems is revolutionizing the way producers and distributors do business, blurring the value boundaries between manufacturers, suppliers, customers and partners.

Skills and Talents of an OR Practitioner

Regardless of the organizational positioning of OR in a company - in-house as part of the enterprise or outsourced to consultants or business partners - the people that practice the science and craft of OR have distinctive knowledge and talents. The three discussed here are: 1. the ability to build computer-based OR business models; 2. the need for business and application domain knowledge; and 3. good communication skills. Operations research is all about building mathematical models. A good practitioner must be able to understand the characteristics and attributes of a complicated business process and then have the ability to abstract and translate the salient points into a
mathematical optimization or simulation model. Most people with a basic understanding of high school mathematics can be trained to transform business objectives and processes into a mathematical framework. A good OR practitioner also knows how to separate important aspects of a business process from the irrelevant distractions. Operations research is a horizontal problem-solving technology that is applicable to a wide range of processes within a business, and cuts across many industries such as manufacturing, transportation, finance and health care. No one person knows the details of all business processes that are applicable for OR applications. Therefore, for a good OR person to be effective, he or she must be a quick study in learning the details of a business. Knowing the business on Day One is not a requirement; having the ability to quickly "get it" is. It is always surprising how OR techniques and models that a practitioner has used in one area of application will reappear in an entirely different industrial application setting. Even though an OR expert may not know your company's business in detail, he or she may have sophisticated tools and techniques that are applicable for helping you solve complex and difficult problems. Finally, a good OR practitioner needs to know how to communicate with a diverse range of people within an enterprise. The guy on the shop floor may have insights that can be used by the OR expert to build the right business model; the COO needs an explanation of the fine points and nuances of nonintuitive but important results computed by an OR application. Practicing good operations research means practicing good communications.

Sources of OR Talent for Your Company

The channels for finding the right OR candidates for your company include:

- The Institute for Operations Research and the Management Sciences (INFORMS) is the primary professional organization for OR practitioners. It has an effective program for linking OR experts with prospective employers. For more information, see http://www.informs.org/.
- Some professional recruiters specializing in technical fields have expertise in placing OR practitioners. For example, Smith Hanley Associates and Analytic Recruiting Inc., both in New York, have recruiters who understand the requirements of companies seeking OR expertise. (See http://www.smithhanley.com/ or http://www.analyticrecruiting.com/) INFORMS also has information on professional recruiting organizations.
- Finally, a good way to find an operations research specialist for your company is to hire an OR consultant who has good contacts in the OR community. The consultant can quickly determine the range and level of problems that need to be addressed in the enterprise, and then help you find the right person - one who can solve your problems and is compatible with your organization.

Using Science to Help Make Decisions

At the end of the day, managers and executives still make the decisions. The role of OR is to augment the decision-making process with an analytic framework for helping deal with complexity and uncertainty. In the 21st century, no aircraft manufacturer would dream of bringing a new aircraft to market that had not been modeled and simulated on a computer. Similarly, no business executive should dream of making a strategic decision with major impact on the corporation until that decision has been modeled and validated using operations research technology.

Harlan P. Crowder (harlan_crowder@hp.com) is a Senior Scientist at Hewlett-Packard Laboratories in Silicon Valley, Calif.

Knowledge Base
The Study of Transportation is Paved with Science (ORMS Today, August 2000, Volume 27, Number 4)

Published by Randolph W. Hall on 30/01/2007

The Study of Transportation is Paved with Science

Human curiosity and the desire to explain how the world around us behaves drive a fertile application area of operations research.

By Randolph W. Hall

In operations research, it is not unusual to see the word "science" affixed to a discipline, as in management science, manufacturing science or organizational science. Cynically, one might view this as clever marketing, for science is often equated with goodness, quality or purity. And try as we may to make OR scientific, it certainly is not all that the profession is about. A good practitioner of "management science" needs many skills and talents, most of which have little to do with science. Cynicism aside, a scientific thread does seem to underlie operations research. More than a century ago, Stanley Jevons wrote in "The Principles of Science": "The whole value of science consists in the power which it confers upon us of applying to one object the knowledge acquired from like objects." [1] By aspiring to do just this - to understand, design and operate systems in a manner that gains knowledge from like systems - we behave scientifically.

The "Thought-Chain of Science"

Jevons' insights provided motivation for "The Handbook of Transportation Science," recently published by Kluwer. [2] The premise for our book is that transportation can be defined as a scientific discipline that transcends transportation technology and methods. Whether by car, truck, airplane - or perhaps a mode of transit not yet conceived - transportation obeys fundamental properties. The science of transportation defines these properties and demonstrates how knowledge of one mode of transportation can be used to explain the behavior of another.

Like any of the natural sciences, transportation science as a discipline arose out of human curiosity and the desire for explanations for how the world around us behaves. In the words of famed physicist Max Planck, "The beginning of every act of knowing, and therefore the starting point of every science, must be in our own personal experience ... They form the first and most real hook on which we fasten the thought-chain of science." [3] And so is the case for transportation science. When we look back to the earliest publications on the subject from the 1950s and early 1960s, we see first a desire to understand the dynamics of roadway traffic. Then and now, there is hardly a person in the profession who does not view a trip on the highway as a scientific experiment, seeking to understand why traffic flows as it does, how bottlenecks appear and disappear, and what causes the myriad of driving behaviors.

Transportation Phenomena

Transportation scientists are motivated by the desire to explain spatial interactions that result in movement of people or objects from place to place. Transportation science's heritage includes research in the fields of geography, economics and location theory. Its methodologies draw from operations research, probability and control theory. It is fundamentally a quantitative discipline, relying on mathematical models and optimization algorithms to explain the phenomena of transportation. Transportation science also draws from the natural sciences, for transportation does not just appear in human-built systems. Transportation naturally occurs in blood circulation, bird migration, ant navigation, rivers and currents, atmospheric flows, refraction of light, orbits and animal territories. Long before humans began inventing technologies to facilitate transportation, the world was a dynamic place with objects and organisms in constant motion, not just obeying the laws of physics, but also obeying principles of intelligent transportation design. Many early transportation researchers were, in fact, trained in natural sciences, and cleverly combined knowledge of natural phenomena, such as thermodynamics and fluid mechanics, with their observations on traffic flow. Transportation science recognizes that all modes of transportation have the same essential elements - vehicles, guideways and terminals - operating under some control policy. Vehicles comprise mobile resources that accompany persons or shipments (P/S, discrete or continuous) as they travel. They provide the motive power to propel P/Ss on their trips, and the carrying space to ensure a safe and comfortable journey. Guideways are stationary resources that define feasible paths of travel and provide the physical infrastructure to support vehicles and P/Ss. They add safety by restricting movements to defined paths, and they provide an efficient surface for movement. 
Terminals are stationary resources that reside at discrete locations. They offer the capability to sort vehicles, persons and objects among incoming and outgoing transportation routes. Lastly, control represents the rules, regulations and algorithms that determine movements and trajectories within transportation systems.

Evolution of Transportation

Many years ago, transportation occurred by human, animal and natural (e.g., wind, currents, gravity) power, in simple vehicles (or none at all), on guideways that required little construction. Terminals, if they could be called that, were market towns, caravansaries or trading posts, and control was executed through the minds of individual travelers. By contrast, today most movement depends on propulsion by motors or engines, built guideways and terminals, and, to some degree, computer control. Supporting communication technology is also undergoing rapid change, through Internet purchasing, wireless data communication and mobile computing. So in many respects, one might say that transportation of the late 20th century has little in common with its ancestors. Nevertheless, similarities abound. For any given mode of transportation, vehicles, guideways, terminals and control are configured to perform several basic functions. All modes provide the capability to propel, brake and steer. Most (even animal and human) provide mechanisms to store energy for propulsion, to sort persons and objects at terminals, to couple shipments together into efficient loads, and to contain these shipments as they travel from place to place. How a mode of transportation accomplishes these functions may be unique, but the basic tasks are the same. [4]

Branches of Transportation Research

Transportation science in part describes how humans and systems behave when making transportation decisions, and in part prescribes how decisions ought to be made when optimizing a transportation objective. On a day-to-day basis, individuals are
presented with a plethora of transportation choices, some of which are determined by ingrained habits and circumstances; others of which result from deliberation. At the most routine level, driving behavior is reflected in a continuous stream of decisions defining speed and direction of travel. The route followed, time of travel and, to some degree, the choice of destination and mode are all daily decisions, constituting short-term traveler behavior. These decisions are imbedded within the broader context of how we plan and organize our activities, constituting long-term behavior. Where we reside and where we work, and how human activity is organized in built environments (cities, towns, residential developments, business districts, etc.) are examples of human decisions with long-term consequences. Collectively, transportation behavior constitutes one of the main branches of research in transportation science. Another branch of transportation science focuses on flows and movement along guideways. The essential characteristic is that interactions occur among entities, both along the path and at points where paths intersect, split or combine. As flow rates increase, speeds tend to decline, density tends to increase and queueing may occur. These phenomena have been studied in great depth in the traffic flow literature, which is foremost descriptive in nature. Control policies for flow networks, on the other hand, are prescriptive. They are used to optimize movement along guideways. Policies can include localized controls, regulating trajectories of individual vehicles; segment based controls, regulating groups of vehicles passing through intersections or segments; or global controls affecting entry or exit to/from the network or network routing. Routing - optimizing the path(s) followed by entities as they move from place to place - is also an area of prescriptive transportation science. 
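The navigation side of routing - optimizing the path followed from place to place - can be sketched with a textbook shortest-path computation. The snippet below uses Dijkstra's algorithm on a tiny made-up road network; the node names and travel times are invented for illustration and are not taken from the article.

```python
import heapq

def dijkstra(graph, source):
    """Return shortest travel times from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, travel_time) pairs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, t in graph.get(node, []):
            nd = d + t
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A toy network: a depot, two junctions, and a customer site.
network = {
    "depot": [("A", 4.0), ("B", 7.0)],
    "A": [("B", 2.0), ("customer", 8.0)],
    "B": [("customer", 3.0)],
    "customer": [],
}
print(dijkstra(network, "depot"))  # depot->B costs 6 via A; customer costs 9
```

The same computation is a building block for the assignment and sequencing tasks discussed next, which repeatedly need the cost of moving a resource from one piece of work to another.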
The three basic tasks of routing are assignment (determining which resources perform which pieces of work), sequencing (the order in which work is completed) and navigation (the path followed from one assignment to the next). Routing methods are needed not just for vehicles but for all types of mobile resources, including containers, trailers and crews that operate the vehicles. Managing the flows of resources, while satisfying constraints on work rules, is one challenge in vehicle routing. One more area of prescriptive transportation science is design, including network design and location. There are natural patterns to how a network should be constructed. A tree-like structure is found in most naturally occurring transportation networks, such as rivers, blood circulation and plants, as well as many distribution and supply networks. But human-built transportation networks tend toward a denser structure, offering redundant paths, while nevertheless following familiar patterns such as grid, ring/radial or hub-and-spoke. Design of these networks to facilitate efficient movement forms another body of research.

Application or Theory?

Perhaps the simplest - and most confusing - way to classify research is to place it in one of two categories: "applied" or "theoretical." In OR, the latter inevitably seems to begin with a set of mathematically stated assumptions and leads to proof of theorems that are derived from these statements. Applied research, by contrast, begins with a problem statement (presumably representative of a real organization), and proceeds to a solution through the application of known algorithms or other mathematical techniques. Examples of "theoretical" and "applied" research exist in transportation science. Clear evidence of the applicability of operations research and management science to transportation can be seen in past issues of OR/MS Today, and plenty of transportation related theorems can be found in other INFORMS publications. Yet following the simple definitions, most transportation research could only be classified as "neither." While one might say transportation is by definition an application (it is certainly not an abstraction), a grounding in the real world should not preclude theory. Physics certainly is not an "applied science," even though it is a real-world application of mathematics. It is just as natural to be a theoretical transportation researcher as a theoretical physicist. One strength of transportation science is the relatively high status given to empirically based research and to theories induced from data on real-world phenomena (e.g., theory without provable theorems). In this way, transportation is different from some OR disciplines, perhaps because "the problem" is not always the centerpiece of the research. Transportation phenomena can be studied without having the goal of optimizing some objective function, and without the intention of serving the business needs of a private company. Another likely reason is that data are more readily available for transportation than for other phenomena studied in OR. 
Also, transportation simply invites observation. Nearly everyone experiences transportation, and thus has the ability to form empirically founded theories for its behavior. It stimulates, in Max Planck's words, the "thought-chain of science" through personal experience. On the other hand, transportation does not lend itself to as high a degree of precision as other parts of OR. Theories are tested,
but they are less likely to be "proven," and they seldom predict with total accuracy. But proof and total accuracy are not demands of science; they are demands of mathematics. As philosopher Karl Popper stated: "Science never pursues the illusory aim of making its answers final, or even probable. Its advance is, rather, towards the infinite yet attainable aim of ever discovering new, deeper and more general problems, and of subjecting its ever tentative answers to ever renewed and ever more rigorous tests." [5] Transportation offers an example of how OR can be used to build a lasting body of scientific knowledge centered on real-world phenomena. It should be no surprise that transportation is also one of the most fertile application areas of operations research. It is our fundamental understanding of transportation, through scientific research, that has allowed us to make transportation better. This is the promise for operations research.

References

1. Jevons, W.S., 1958, "The Principles of Science," Dover Publishing, New York, p. 1.
2. Hall, R.W. (editor), 1999, "Handbook of Transportation Science," Kluwer Academic Publishers, Norwell, Mass.
3. Planck, M., 1932, "Where is Science Going," W.W. Norton and Company, New York, p. 66.
4. Hall, R.W., 1995, "The architecture of transportation systems," Transportation Research, C, 3, pp. 129-142.
5. Popper, K.R., 1959, "The Logic of Scientific Discovery," Basic Books, New York, p. 281.

Randolph Hall is a professor of Industrial and Systems Engineering and director of the METRANS Center at University of Southern California. He is the editor of "Handbook of Transportation Science" and author of "Queueing Methods for Services and Manufacturing."

Knowledge Base
Disruption Management (ORMS Today, October 2001, Volume 28, Number 5)


Published by Jens Clausen, Jesper Hansen, Jesper Larsen and All on 30/01/2007

Disruption Management

Case studies from the airline and shipbuilding industries show how OR can get interrupted operations back on track - fast.

By Jens Clausen, Jesper Hansen, Jesper Larsen and Allan Larsen

A passenger aboard a Boeing 747 from New York to London suddenly loses consciousness. Fearing the passenger may be having a heart attack, the captain decides to divert to Gander to get immediate help. A delay of the planned arrival at Heathrow is unavoidable, but the airline's Operations Control Center (OCC) takes no action because heavy air traffic over London is delaying flights anyway. While performing the necessary checks before takeoff from Gander, the captain discovers that one of the checks fails. Normally, this would not pose a severe problem, but the required technical expertise is not present at Gander, and now the situation turns into a serious delay. The disruption from the planned schedule will affect passengers as well as the next planned activity of the aircraft and the crew. The disruption can be solved in various ways. One solution: fly in the necessary personnel to Gander to complete the check of the aircraft. However, this gives rise to an overnight stop, and the passengers need accommodations. Unfortunately, there are a number of first-class passengers aboard the plane who are entitled to 5-star accommodations in such a situation, and such accommodations are not available in Gander. Thus, the solution is not feasible. The airline opts to hire a Boeing 747 from another airline, fly it to Gander to pick up the passengers and continue to Heathrow. This constitutes a very expensive solution. In addition, the airline is left with the problem of getting crew and aircraft back to their planned activities as quickly as possible - not an easy task. Is there a better solution to the problem? Operations research methods have a proven track record of delivering high-quality solutions to a vast range of planning problems, most notably in the airline industry, production management and logistics.
For more than a year, researchers at the Department of Informatics and Mathematical Modelling at the Technical University of Denmark have been working on applying OR in a new, exciting field: disruption management. In this context, a disruption is defined as a situation during the operation's execution in which the deviation from plan is sufficiently large that the plan has to be changed substantially. The plan produced by OR-based decision support can be applied on the day of the disruption, it can be adjusted to take last-minute changes into account, or it can produce alternative plans well ahead of potential problems.

The disruption described above was serious enough for the passengers involved, but it was not a major disruption. Major disruptions include the closure of airports or airspace due to snow storms, strikes or - as was the case with the recent terrorist attacks in the United States - events that are beyond comprehension. Of course, costly disruptions are not limited to the airline industry. In the shipbuilding industry, for example, the just-in-time approach to production gives rise to an increased demand for robustness in plans and calls for enhanced tools to handle disruption situations. Odense Steel Shipyard - a major shipyard in Denmark - assembles ships in a large dock utilizing a gigantic portal crane as the prime tool. During December 1999, Denmark was hit by the worst hurricane ever recorded. The hurricane blew the OSS portal crane into the dock where a ship was under construction. The disaster immediately closed down production in the dock and disrupted the shipyard's activities for several months.

The Process Cycle of an Operation

The airline and the shipbuilder are involved in different activities, but in order to carry out their daily operations they both produce a plan. As the date of the particular operation approaches, the plan is adjusted to take into account changing circumstances. This is typically called the tracking process. On the day of operation, the plan is implemented, and the operation is monitored during execution. What happens when the observed situation deviates from the planned situation? If the deviation is marginal, no immediate action may be required in order to continue the operation. If the impact of the deviation on the operation is substantial - either because the current plan becomes infeasible or because the cost or benefits of running the operation according to the current plan changes - a disruption has occurred. In order to continue operations, intervention is necessary to resolve the infeasibilities resulting from the disruption or to decrease costs or increase revenues. The monitoring and re-planning process is referred to as the control process. As opposed to the tracking phase, the time for replanning in the control phase is so limited that the methods used for generating the original plan cannot be used. In Figure 1, the three processes are shown in the context of the daily operation of an airline company.

Figure 1: The timeline for the daily operation of an airline.

A disruption is not necessarily the result of one particular event. For efficient disruption management, the status of the entire system forming the basis for operation is monitored. The process cycle of an operation consists of three elements:

- planning, where resources for executing the operation are assigned to specific activities;
- tracking, where changes in the resource situation are monitored and evaluated, and re-planning is done off-line; and
- control, where changes are monitored, but re-planning is performed online.
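The distinction between marginal deviations (continue the operation) and disruptions (re-plan online) can be sketched as a simple dispatch rule. The 5-percent threshold below is a made-up illustration, not a figure from the article:

```python
def react_to_deviation(deviation, plan_feasible, minor_threshold=0.05):
    """Classify an observed deviation from plan during execution.

    deviation: relative deviation of the observed state from the plan.
    plan_feasible: whether the current plan can still be executed.
    minor_threshold: hypothetical cutoff below which no action is taken.
    """
    if plan_feasible and deviation <= minor_threshold:
        # Marginal deviation: continue the operation unchanged.
        return "continue"
    # The plan is infeasible, or costs/benefits have changed substantially:
    # a disruption has occurred and online re-planning is required.
    return "re-plan online"

print(react_to_deviation(0.02, True))   # marginal deviation
print(react_to_deviation(0.30, True))   # substantial deviation
print(react_to_deviation(0.01, False))  # plan no longer feasible
```

In practice the trigger would weigh costs and revenues rather than a single threshold, but the structure - monitor, compare against plan, escalate only when the deviation is substantial - is the one the three-phase cycle describes.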

Today, the disruption management process generally lacks computerized decision support. As a consequence, decision-makers often stop after having generated a single feasible option for recovery; time simply does not allow for the generation of structurally different alternatives. Often, even simulation tools allowing a "what-if" analysis of the current situation are not available. In the area of disruption management, OR can make a significant difference in the way that operations are recovered and the quality of the recovery.

Disruption Management in Action

Case 1: Managing steel plates. The CIAMM project is a collaboration between DTU, the University of Aalborg and a number of industrial companies, including the Odense Steel Shipyard. The shipyard builds the largest container ships in the world. Ships are built in an assembly line fashion (i.e., several ships are under construction at the same time in different workshops at the shipyard). Hence, it is critical that delays are minimized in each workshop since a delay in one workshop influences the whole production. Each workshop maintains its own planning unit, while an overall planning unit is responsible for coordinating the flow between workshops. The first station in the production of a ship is the steel plate storage where the raw material for the ship is delivered. The steel plates arrive by ship in large bulks, each bulk containing plates to be used for different components and at different times. The plates are stored at an outdoor field with an 8-by-32 grid of stacks until they are requested by the cutting workshop. Each stack contains 20 plates on average. The plates within each stack may vary in size. The plates are stored and retrieved by two portal cranes running on the same pair of tracks. The cranes cannot pass each other. The plates are delivered in one end of the storage and are handed over to the cutting process at the other end. The organization is illustrated in Figure 2.

Figure 2: A 4-by-8 steel plate storage area with two cranes.

At present, the storage is managed using a so-called block-oriented approach in which steel plates to be used in the same section of a ship are stored together. However, there are not enough stacks for each section to have its own. In addition, the plates often arrive weeks or even months prior to the planned use date. The topmost plates in a stack are often not the first to be used, and hence have to be moved in order to get to the relevant plates. The goal of the project is to investigate alternative approaches to storage organization in order to minimize dig-up moves, taking into account that the planned sequence of plates to be delivered from storage often changes due to urgent deliveries. The project team is investigating two possible organizations: the time-slot organization and the self-adjusting organization. In time-slot organized storage, plates are arranged according to their planned use date. The self-adjusting organization determines the location of each new plate and the location of plates moved in dig-up moves based on the current status of the storage and the knowledge of future demands. In both cases, the quality of the solution, as well as the sequencing of the cranes to avoid collisions, is determined by simulating the activities of the storage for the rest of the day. There are at least two approaches to disruption management in the daily operation: the control approach and the re-planning approach. In the control approach, the storage and the cranes are continuously monitored, and the next activity of each crane is decided based on the current status without regard to upcoming activities. Clearly, no efforts are wasted on planning for situations that do not occur. On the other hand, due to the limited time horizon, suboptimal decisions are bound to occur. The alternative strategy is a re-planning approach.
Here, a detailed plan based on the expected production of the coming day(s) is constructed prior to the day of operation, and the operation is run according to this plan. In a deterministic world, an optimal operation results. If disruptions occur, however, some mechanism is needed to take care of recovery.
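To make the notion of a dig-up move concrete: retrieving a plate that is not on top of its stack forces every plate above it to be relocated first, and those relocations are exactly what the project tries to minimize. A minimal sketch, using an invented two-stack storage rather than OSS data:

```python
def dig_up_moves(stacks, plate):
    """Count the relocations needed to retrieve `plate`.

    stacks: list of lists; each inner list is one stack, last element on top.
    A plate buried under k plates costs k dig-up moves.
    """
    for stack in stacks:
        if plate in stack:
            return len(stack) - 1 - stack.index(plate)
    raise ValueError(f"plate {plate!r} not in storage")

# Toy storage: two stacks, with "P3" and "P6" on top.
storage = [["P1", "P2", "P3"], ["P4", "P5", "P6"]]
print(dig_up_moves(storage, "P3"))  # 0 - already on top
print(dig_up_moves(storage, "P4"))  # 2 - two plates must be dug up
```

A storage organization is good precisely when the retrieval sequence tends to find the next requested plate near the top of its stack; summing this count over a day's requests gives the objective the time-slot and self-adjusting organizations are evaluated against.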

Recovery is possible either by online re-planning or by building buffers into the original plan. Building buffers is the current practice of OSS. However, this leads to costly inefficiency. Re-planning without buffers, on the other hand, is dangerous since delays in one workshop will immediately affect the flow through the complete system. The CIAMM project partners have developed a planning tool for the time-slot organization and the self-adjusting organization. The running time of this tool is sufficiently short that it may also be used in a disrupted situation to re-plan activities. The tool is based on the heuristic method simulated annealing, in which each suggested new plan is evaluated through simulation. Simulation plays a crucial role in the project since it also provides the interface to the end-users. This approach has been necessary for two reasons: the constraints of the problem are difficult to handle in classical mathematical models (e.g., the cranes cannot pass each other), and the evaluation of a re-plan when disruptions are taken into account is by no means obvious. Results so far indicate that both the time-slot and the self-adjusting organization of the steel plate storage are superior to the block-oriented organization. In a number of generated scenarios without disruptions, the number of dig-up moves was reduced by 60 percent compared to the block-oriented approach. For scenarios with disruption, similar experiments indicate that the self-adjusting organization is more robust to disruption than the time-slot oriented approach, and that the savings in terms of dig-up moves are comparable to those for the non-disrupted situation. The operation time is reduced 40 percent compared to the block-oriented approach.

Case 2: Holistic approach to airline delays.
With more than 22,000 commercial flights each day in the European airspace and the control spread over more than a dozen national air controls, there are plenty of reasons for disruptions in air traffic. In addition, airlines regularly face restrictive weather conditions, maintenance problems and staff shortages. As a result, one out of four European flights was delayed by more than 15 minutes in the first quarter of 2000. The DESCARTES project, financed partly by the European Union, includes partners DTU, British Airways and Carmen Systems. The project aims at developing decision-support tools for airline disruptions on the day of operation. Currently, plans are made for aircraft, flight crews and cabin crews based on an airline's schedule which is determined at least six months prior to the actual operation date. Making such a plan is complicated for several reasons: aircraft maintenance rules have to be taken into account, the right capacity must be at the right place at the right time, and the characteristics of each airport have to be respected. Crew scheduling has to consider international and national rules regulating flying time, as well as individual airline agreements with unions. The plans for crew assignments, aircraft assignments and maintenance are handed over from the planning department to the OCC a few days ahead of the operation date. Deadlines differ for different resources. For example, the plans for short-haul aircraft are handed over one day before operation, while long-haul plans are handed over five days in advance. As the plan is handed over, it becomes the responsibility of the OCC to maintain the resources so that the flight plan is feasible even if crewmembers get sick or flights arrive late. The OCC concerns itself with not only the immediate situation, but also the knock-on effects on other parts of the schedule since flight crews, cabin crews and aircraft are not planned as a unit. 
Producing recovery plans is a complex task, as many resources (crew, aircraft, passengers, slots, catering, cargo, etc.) have to be re-planned. When disruption occurs on the day of operation, large airlines usually react by solving the problem in a sequential fashion: aircraft, crew, ground operations and passengers. Sometimes, the process is iterated with all stakeholders until a feasible plan for recovery is found. Like many airlines, controllers at British Airways performing the recovery have little computerized decision support to help construct high-quality recovery options. Since it is time consuming, complex work to build a recovery plan, the controllers are often content with producing only one viable plan. Furthermore, the controllers have little help in estimating the quality of the recovery action they are about to implement. One recovery option that is almost always available is cancellation of flights or round trips. From a resourcing perspective, cancellation is ideal; it requires no extra resources and may even result in new, free resources, and little re-planning is required. However, from the passenger side, it is the worst option, since they don't get where they want to go. Determining the quality of a recovery option is (as was the case for the steel plate storage) difficult. The objective function is composed of several conflicting and non-quantified goals. The project aims at developing better support for airline operations problems. There are already systems on the market that in a disruptive situation can help airline controllers resolve disruptions. However, to the best of our knowledge, these systems only consider one resource at a time (e.g. cabin crew). With DESCARTES, we aim to develop an integrated approach that can deliver decision support for several resource areas that takes the highly complex interaction between the areas into account. At present
the focus is on four resources: aircraft, flight crew, cabin crew and passengers. DTU and Carmen Systems developed new optimization methods for this highly time-constrained problem. The disruption management system is built around an infrastructure, "the Umbrella," which facilitates message-passing between the different stakeholders of the process (the managers of flight and cabin crew, aircraft and passengers) and underlying systems performing the actual computations leading to recovery options for the current situation. The team has developed systems for crew recovery and aircraft recovery, and now we're working on systems integrating the recovery of different resources. In parallel, the team has developed two simulators: a consequence analyzer that walks through the rest of the day given a suggested option for a disruption and its knock-on effects, and a stochastic simulator that allows strategic analysis of different overall strategies in disruption handling. Alerting mechanisms will be included in the final system because a disruption is not necessarily the effect of one particular event; it may be the result of a series of smaller events, each of which by itself is not serious. For example, when a single crewmember calls in sick, it is not serious; when many crewmembers call in sick on the same day, it can result in a major shortage of staff. The project, currently in its second year, has produced prototypes that are being tested on real data in a closed environment. Later this year, the systems will be tested with respect to speed and option quality in a simulated online environment, again with real data. With this project, it's crucial to have tools that allow the staff in the production environment to view and investigate the suggested solution options.
The consequence analyzer is valuable as a stand-alone tool, since it allows the decision-makers to simulate the effect of potential decisions and to develop a better understanding of the effect of different types of strategies (avoid cancellations by all means, return to plan as fast as possible, never leave any problems to the next day, etc.).

Conclusion

Disruption management is an application area for OR that has huge potential and offers substantial gains in efficiency for the users involved. Applications range from industrial companies to the public sector (see box). Solution methods must be able to produce good and structurally different solutions fast due to the online flavor of the problems. Thus, the technical challenge is to develop methods that produce robust and near-optimal solutions fast for real-life problems. Even with the tremendous development in the field of heuristics, this is by no means a trivial task.

Other Applications

Container traffic: Just as airlines are allotted slots for their aircraft at the airports, container ships are assigned slots at container terminals. Resources like stacker cranes and re-supply trucks are planned on the basis of when the container ship is scheduled to arrive and depart. If a ship is delayed, a number of resources have to be re-planned. The shipping company may even want to redirect a ship to another port, thereby saving time or reestablishing a profitable/sensible plan.

Network operation: Telecommunication companies sell communication bandwidth in point-to-point connections to users. Whenever there is an equipment failure, action has to be taken. If the situation requires something other than automatic rerouting, a disruption has occurred and a disruption management tool could be useful.

Substitution handling in primary schools: The goal of the Danish primary schools is to educate children, but for the smaller children, the school also has a daycare function. Hence, situations where staff report sick or are otherwise away from the school have to be handled by substitutes. This process typically has to take place on short notice.

Jens Clausen (jc@imm.dtu.dk), Jesper Hansen, Jesper Larsen and Allan Larsen are researchers in the Department of Informatics and Mathematical Modelling at The Technical University of Denmark.

Knowledge Base
Trying to Capture Dynamic Behavior (ORMS Today, April 2002, Volume 29, Number 2)

Published by Laureano F. Escudero on 30/01/2007

A whirlwind tour of industrial applications of mathematical programming

By Laureano F. Escudero

This article presents a set of real-life industrial applications of mathematical programming that range from LP and 0-1 programming to combinatorics, network optimization, nonlinear optimization and stochastic programming, in the broad area of supplying, production, allocation, distribution, scheduling and dynamic planning. The application cases belong to the strategic, tactical and operational domains.

Description of Applications

Open Market Electric Generation Allocation. The pace of deregulation and the introduction of competition into the energy industry are accelerating globally. The main objective of the electricity market deregulation all over the world is to decrease the cost of electricity through competition. This is achieved through radical changes in the market and regulatory structure, such as the "unbundling" of functions (separation of generation, transmission and distribution segments) and the creation of bid-based electricity markets (see [15] and [19]). One of the tools that can be used in this new environment is a modeling and algorithmic framework for robust simulation of multiperiod hydrothermal power management under uncertainty. The uncertainty involves generators' availability, fuel procurement, transport and stock costs, exogenous water inflow at river basins and energy demand along a given time horizon. Very often there are thousands of constraints and variables for deterministic situations. Given today's optimization state-of-the-art tools,
deterministic models should not present major difficulties for problem solving, at least in small environments. However, since the pioneering work of Martin Beale and George Dantzig in the mid-1950s, researchers have recognized that traditional deterministic optimization is not suitable for capturing the truly dynamic behavior of most real-world applications. A better approach for such situations is to employ two-stage scenario analysis in which an electric generation decision policy, for example, can be implemented for a given set of initial time periods. The solution for the other periods need not be anticipated since it depends on the scenario that occurs (see [14]). The "dualization" of the coupling constraints for the splitting control variables of the last period from the first stage results in a quasi-separable Lagrangian function in which Augmented Lagrangian Decomposition (ALD) schemes can be used. (A multi-stage nonlinear stochastic network approach for hydropower generation optimization is described in [10]. The dualization of the coupling constraints for the scenario groups in each time period along the time horizon allows for an ALD approach. A parallel-computing scheme benefits from the related structure.)

Oil Supply, Transformation and Distribution Planning under Uncertainty. The problem addressed is a modeling and algorithmic approach for optimizing the logistics of supplying, transformation and distribution scheduling of oil products under uncertainty. The product is transported from its origin to refineries and storage depots, and from there to destinations over a given time horizon. The goal is to obtain a procurement, transportation and production schedule at a minimum expected cost of raw material supply, transformation, transport and storage.
The schedule must satisfy the end-product demand, subject to supply limitations, transformation constraints, transportation mode capacity and stock limitations, and some other logical, technical, economic and regulatory constraints. The complication arises from the data uncertainties due to the fact that the information that will be needed for subsequent stages is not available to the decision-maker when the decision has to be made. The problem then exhibits uncertain supplies and demands as well as uncertain raw material spot prices, refinery productions, stock inventories, and transformation means and transportation mode availability, among others. The uncertainty is modeled via scenario analysis. It results in a huge LP problem. The model representation is performed by using a splitting variable scheme in a two-stage approach under the non-anticipative principle. A novel scheme for dealing with the multi-stage linking constraints under uncertainty is presented in [16]. The mathematical programming model is amenable for ALD schemes.

In-house Production and Outsourcing Planning via Scenario Analysis. The planning and utilization of production capacity is one of the most important managerial responsibilities in manufacturing. In particular, the problem consists of deciding how much in-house production and how much outsourcing is required at each time period along a planning horizon, such that the production capacity constraints, the product stock limitations and the order backlog requirements are satisfied. Such decisions have to be made in the face of uncertainty in several important parameters, the most important of these unknowns being market demand for the products to be manufactured. The uncertainty is treated via scenario analysis. Several alternatives for a multi-stage case are considered in [13], namely, complete recourse, partial recourse for in-house production, partial recourse for outsourcing and simple recourse.
Note that the non-anticipativity principle is preserved for the first three strategies.

Supply Chain Management Under Uncertainty. A two-stage modeling and algorithmic approach for multi-period manufacturing, assembly and distribution supply chain management under uncertainty in product demand and component supplying cost and delivery (among other parameters) has been developed via scenario analysis (see [11]). The supply chain has the following elements: product-cycle time, bills of material, effective periods segment for component assembling, product demand, maximum backlog and stock allowed, production resources limitation, prime components and replacements, raw components, subassembly and end products, single and multi-level products, in-house production and vendor sourcing, etc. The main goal is to determine the master production schedule as well as the volume and location of protective stock across a manufacturing network for the time periods in the first stage, and for the other periods along the time horizon under each scenario to minimize the regret of wrong decisions. A variety of constraints related to (minimum and maximum) stock limitations, bill of material requirements, production capacity limitations, and demand and backlog requirements are satisfied. The model approach is based on a splitting first-stage variable representation. Novel schemes are presented for representing the mathematical model. By using the expressions of some variables (say, stock levels, prime components utilization and lost demand) the redundancy of some multi-period related constraints could be detected. The resulting Deterministic Equivalent Model allows for decomposing the problem by using a dual approach for the splitting variable representation. ALD schemes can be used in parallel computing environments. (See [9] for a deterministic version of the problem.) Given the large-scale dimensions of some instances, an LP algorithmic Sprint-based approach has been developed.
In this scheme the constraint-working matrix is updated at each major iteration by appending the violated constraints, relaxing certain non-active constraints, zero-fixing non-promising variables and freeing promising ones. (See [3] for an extension of the model to strategic planning under uncertainty.) The main decision is related to product selection and plant sizing, location and product
assignment to minimize the expected profit along a time horizon over the scenarios minus the investment depreciation cost. It has been modeled as a huge mixed 0-1 program. A so-called "Branch-and-Fix Coordination" approach has been developed (see [2]).

Demand Capacity Allocation Planning. Capacity requirement planning is normally based on long-term demand forecasting and part-type mix estimates. In the execution of a production plan, the capacity assumptions previously made are frequently no longer valid. This is because the part-type mix, the operator and the machine availability or the existence of additional resources have changed. This may lead to underload as well as to overload situations for particular time periods. (See [5] for a mixed 0-1 modeling and algorithmic approach for the problem-solving.) Given the large-scale dimensions of many instances, it is unrealistic to expect to obtain the optimal solution in affordable computing time. However, a heuristic algorithm, the so-called "Fix-and-Relax," is proposed to partially explore the branch-and-cut tree. The approach takes into account all available information at the shop-floor level, such as: machine availability over the planning horizon; unit processing time for the part types; machine set-up time required while changing part types that belong to different families; storable and non-storable resources availability and consumption; part-type demand over the time horizon; and bounds on production rate and backlogging. Furthermore, the model correctly accounts for costs corresponding to time-period overlapping set-ups.

Sequential Ordering Problem. The Sequential Ordering Problem consists of finding the appropriate permutation of nodes from a directed graph such that a given parameter (in our case, the length of the Hamiltonian path) is minimized and certain constraints are satisfied.
The main application field lies in the production sector, where a part type is defined as a list of operations to be performed once and, although there is some flexibility in the execution sequence of the operations, there are precedence relationships between the executions of some operations. The goal is to sequence the operations to minimize the "makespan" satisfying the precedence. We can also consider jobs instead of operations, transportation costs instead of set-up costs and release and due dates to be satisfied for given jobs (see [1], [4], [12] and [18]).

Resource-Constrained Operations Sequencing and Scheduling. A pure 0-1 model is the core of an application for production sequencing and scheduling in a deterministic multi-period, multi-task environment (see [8]). The problem consists of determining the scheduling of a set of items (jobs, in our case) to be assigned to a process along a given time horizon (a day, week, month), so that the release and due dates of the items are satisfied. There are a fixed number of time periods during which each item must be assigned (e.g., produced, maintained) along the time horizon. On the other hand, there is a set of groups of items, and a set of classes of items, such that only one group of items for each class can be assigned. A unique time interval has to be selected for the assignment of each item if its group is selected. The items can also be distributed in different types, such that only one item from each type can be simultaneously in assignment (e.g., only one item can be processed at once in dedicated machines). Different types of precedence relationships in the assignment of the items are considered. There is a set of resources with limited availability along the time horizon. The items may require a different amount of the resources in each of their production time periods. The goal consists of determining a feasible schedule for the item assignment to minimize the given objective function.
Different alternative functions are considered, namely, the makespan and the total assignment cost. This type of problem can frame applications as different as maintenance scheduling of energy generators [6] and other production units [7], project selection and "periodification" [17] and, obviously, operations sequencing and scheduling. From a practical point of view, and due to its combinatorial nature, the problem cannot be solved to optimality in affordable computing time for large-scale instances, but efficient heuristic approaches are used.

Common Features

The applications described above are from the following sectors: electric generation and trading, automotive manufacturing, computer manufacturing, gas, chemical and oil procurement and logistics, maintenance planning, production capacity expansion and investment planning. In general, most of the results can be applied to other sectors as well. Logistics planning (procurement, production, allocation, and distribution planning and scheduling) is one of the common features for all applications. On the other hand, all of them also share a dynamic component, i.e., most of the constraints and variables are time indexed along a time horizon. Most of the applications have uncertain data, e.g., raw material cost, transport time and means availability, product demand and price, resources availability, nature-related parameters such as water exogenous inflow, etc. Some sort of risk management is needed. Some applications have 0-1 variables, i.e., the so-called decision variables. Interestingly enough, there is only one
deterministic LP application. The hydrothermal power generation planning is the only application that has nonlinear relationships among the variables. The nonlinearity is due to the hydropower generation functions. Finally, the applications are large in scale, which means it is not realistic to seek optimal solutions, especially for the stochastic cases where the uncertainty is represented by a set of scenarios. In terms of technical features, many of the applications outlined above have uncertain parameters in the objective function. The stochastic approach that has been used for dealing with the uncertainty of the parameters is based on scenario analysis. Moreover, the selection of the representative set of scenarios is still an open problem. A splitting variable representation for the Deterministic Equivalent Model of the stochastic problem is proposed for this type of application. This representation uses a coupling constraint scheme either for linking the scenario submodels or for linking the submodels related to the scenario groups from each stage along the time horizon. It is very amenable to ALD approaches. Note that the submodel constraint matrix does not vary from one scenario to another and, so, the devices for model generation and initial solution building benefit from it. There are several huge 0-1 dynamic models. A heuristic algorithm, the so-called Fix-and-Relax, has been proposed to partially explore the branch-and-cut tree for obtaining good solutions in the deterministic environments. On the other hand, an exact algorithm, the so-called Branch-and-Fix Coordination, has been proposed to coordinate the execution of the scenario-related branching phases for stochastic 0-1 models. Most of the applications require inter-disciplinary OR teams with skills in different disciplines such as mathematical programming modeling, probability, scenario generation, artificial neural networks, cluster analysis and Monte Carlo simulation, among others.
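The two-stage scenario idea that recurs throughout these applications can be illustrated with a deliberately tiny example: one first-stage decision shared by all scenarios (the non-anticipativity principle), with only the recourse differing per scenario. The numbers and names below are illustrative assumptions, not data from any of the cited applications, and brute-force search stands in for the ALD and branching schemes discussed in the text.

```python
# Three demand scenarios with their probabilities (illustrative data)
SCENARIOS = [(0.3, 80), (0.5, 100), (0.2, 130)]
FIRST_STAGE_COST = 5.0   # cost per unit committed before demand is known
RECOURSE_COST = 12.0     # cost per unit bought after a scenario is revealed

def expected_cost(x):
    # x is shared by every scenario (non-anticipativity); only the
    # recourse purchase max(d - x, 0) depends on the scenario
    return FIRST_STAGE_COST * x + sum(
        p * RECOURSE_COST * max(d - x, 0) for p, d in SCENARIOS)

# brute-force enumeration stands in for the decomposition algorithms
best_x = min(range(0, 131), key=expected_cost)
```

Here the optimal commitment covers the middle scenario: committing one more unit past 100 costs 5.0 but only saves 12.0 x 0.2 = 2.4 in expectation, since just the high-demand scenario would still need recourse.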
Several applications present algorithmic schemes for parallel optimization based on PC clusters. Some others are well suited for being implemented in parallel computing environments.

References

1. A. Alonso, P. Detti, L.F. Escudero and M.T. Ortuno, "On Dual Based Lower Bounds for the Sequential Ordering Problem with Precedences and Due Dates," (in the refereeing process), 2001.
2. A. Alonso-Ayuso, L.F. Escudero and M.T. Ortuno, "BFC, a Branch-and-Fix Coordination algorithmic framework for solving stochastic 0-1 programs," (in the refereeing process), 2001.
3. A. Alonso-Ayuso, L.F. Escudero, A. Garin, M.T. Ortuno and G. Perez, "A Stochastic 0-1 Program based approach for Strategic Supply Chain Planning under Uncertainty," Journal of Global Optimization (accepted for publication), 2001.
4. N. Ascheuer, L.F. Escudero, M. Groetschel and M. Stoer, "A cutting plane approach to the sequential ordering problem (with applications to job scheduling in manufacturing)," SIAM Journal on Optimization, 1993, Vol. 3, pp. 25-42.
5. C. Dillenberger, L.F. Escudero, A. Wollensak and W. Zhang, "On practical resource allocation for production planning and scheduling with period overlapping setups," European Journal of Operational Research, 1994, Vol. 5, pp. 275-286.
6. L.F. Escudero, "On energy generators maintenance scheduling constrained by the hourly distribution of the weekly energy demand," Report G320-3420, IBM Scientific Center, Palo Alto, Calif., 1981.
7. L.F. Escudero, "On maintenance scheduling of production units," European Journal of Operational Research, 1982, Vol. 9, pp. 264-274.
8. L.F. Escudero, "S3 sets. An extension of the Beale-Tomlin special ordered sets," Mathematical Programming, 1988, Vol. 42, pp. 113-124.
9. L.F. Escudero, "CMIT: A Capacitated Multi-level Implosion Tool," European Journal of Operational Research, 1994, Vol. 76, pp. 511-528.
10. L.F. Escudero, J.L. de la Fuente, C. Garcia and F.J. Prieto, "A parallel computation approach for solving multistage stochastic network problems," Annals of Operations Research, 1999, Vol. 90, pp. 131-160.
11. L.F. Escudero, E. Galindo, G. Garcia, E. Gomez and V. Sabau, "SCHUMANN. A modeling framework for supply chain management under uncertainty," European Journal of Operational Research, 1999, Vol. 119, pp. 14-34.
12. L.F. Escudero, M. Guignard-Spielberg and K. Malik, "A Lagrangean relax-and-cut approach for the Sequential Ordering Problem," Annals of Operations Research, 1994, Vol. 50, pp. 219-237.
13. L.F. Escudero, P.V. Kamesam, A. King and R.J.-B. Wets, "Production planning via scenario modelling," Annals of Operations Research, 1993, Vol. 43, pp. 311-335.
14. L.F. Escudero, I. Paradinas, J. Salmeron and M. Sanchez, "SEGEM: A Simulation approach for Electric Generation Management," IEEE Transactions on Power Systems, 1998, Vol. 13, pp. 738-748.
15. L.F. Escudero and M. Pereira, "New trends and OR/MS opportunities in the electricity open market," OR/MS Today, 2000, Vol. 27, No. 2, pp. 42-46.
16. L.F. Escudero, F.J. Quintana and J. Salmeron, "CORO: A modeling and algorithmic framework for oil supply, transformation and distribution optimization under uncertainty," European Journal of Operational Research, 1999, Vol. 114, pp. 638-656.
17. L.F. Escudero and J. Salmeron, "On a Fix-and-Relax framework for large-scale resource constrained project scheduling," (in preparation), 2002.
18. L.F. Escudero and A. Sciomachen, "The job sequencing ordering problem on a card assembly line," in: T.A. Ciriani and R.C. Leachman (eds.), "Optimization in industrial environments," J. Wiley, London, 1993, pp. 249-260.
19. L. Legorburu and L.F. Escudero, "OMEGA-IST-1999-12088: An Open Market Energy Generation Allocation e-commerce system," in: B. Stanford-Smith and P.T. Kidd (eds.), "E-business: Key Issues, Applications and Technologies," IOS Press, Amsterdam, 2000, pp. 833-837.

Laureano F. Escudero (escudero@umh.es) is a professor at Centro de Investigacion-Operativa, Universidad Miguel Hernandez in Elche (Alicante), Spain.

Knowledge Base
Right on Queue (ORMS Today, April 2003, Volume 30, Number 2)



Published by Derek Atkins, Mehmet A. Begen, Bailey Kluczny, Anita Parkinson and Martin L. Puterman on 30/01/2007


Right on Queue

OR models improve passenger flows and customer service at Vancouver International Airport

By Derek Atkins, Mehmet A. Begen, Bailey Kluczny, Anita Parkinson and Martin L. Puterman

As operations research professionals we spend a considerable amount of time traveling to client sites, academic meetings and university seminars. After Sept. 11, 2001, these trips became longer as heightened security measures led to new and more complex security screening processes, sometimes resulting in longer lines and decreased throughput at security checkpoints. Immediately following 9/11, it was not unheard of to spend more time in security lines than in the air. A timely and innovative OR-based study carried out by Vancouver International Airport Authority (YVRAA) in conjunction with students, staff and faculty from University of British Columbia's (UBC) Commerce's Centre for Operations Excellence (COE) showed that through efficient scheduling and job deployment, 90 percent of Vancouver International Airport (YVR) passengers could expect to wait no longer than 10 minutes at pre-board screening (PBS) security points. The Vancouver International Airport is operated and managed by the Vancouver International Airport Authority. This community-based, not-for-profit organization operates under a long-term lease from the Canadian government. Its focus on safety, security and customer service has contributed to YVR's ranking among the top 10 airports in the world. To maintain its excellent customer service standards and in anticipation of new government regulations, airport management sought to take leadership in improving customer flow through its airport security checkpoints. It was at this point that YVRAA turned to the COE for assistance. The COE carried out a two-phase study (see the Figure 1 schematic) to address YVRAA managerial needs. The study focused on the operation of the two domestic, the trans-border and the international pre-board screening locations.
In the first phase, the research team developed the YVRAA Security Queuing Simulation Model to compare several operational strategies and determine staff levels required to obtain achievable service standards. The second phase was to generate shift schedules that would achieve these standards at minimal cost.

Figure 1: Schematic overview of study components and outcomes.

Pre-Board Screening

While most of us have passed through pre-board screening, it is unlikely that we have paid much attention to the intricacies of this process. With process flow software and stopwatches in hand, the research team viewed the pre-board screening operation through analysts' eyes. Team members (primarily students) spent several days, some starting at 5 a.m., observing the airport's four pre-board security locations. They observed the flow of passengers through the whole screening process, and collected data on passenger characteristics and time spent at each of the individual process steps. This activity alone highlighted some areas for immediate improvement. Recommendations included better signage on payment of the Airport Improvement Fee (AIF), asking passengers to boot up their laptops and place their metal possessions into special containers before entering the pre-board screening area, and reconfiguring the layout to allow more space for manual searches. Measures like these immediately cut down on bottlenecks
and improved the flow of passengers through the pre-board screening area. Observations were corroborated by interviews with security personnel. Bringing all the pieces together gave us a thorough understanding of the system's processes and procedures and enabled us to develop preliminary process maps. The preliminary maps underwent a number of revisions to ensure that all parties agreed that they accurately represented system flows and operations. The Airport Authority provided AIF data from each of the airport's four screening points. AIF data was used as a proxy for passenger volumes at each of the four screening points and was one of the main inputs for the simulation model. It was also used to validate the departing passenger generator used in the staff scheduling analysis described below.

Simulation Development and Benefits

To overcome the complexity of the system, modeling began with only a single screening line. This gave us insight into which aspects of the process we understood well and where we required additional observation and data. Once we were satisfied with our model of a single screening line, we extended it to the full system consisting of five parallel screening lines as depicted in the simulation screen shot (Figure 2).

Figure 2: Screenshot of YVRAA Security Simulation showing the pre-board screening configuration in the domestic terminal. Passenger colors indicate the number of bags carried; security staff are represented in red when active and green when inactive.

To validate our simulation, we collected additional waiting time and throughput data. We compared the simulation output to this data and revised the logic and service time data until we were satisfied that the simulation output agreed with the observations. Since pre-board screening operations throughout the airport are similar, further data collection allowed us to extend the model to all four screening areas. Changing parameters such as the bag search ratio and the number of bags carried allowed us to model the international and trans-border screening areas in addition to the domestic screening points. After this step was completed, we were able to conduct meaningful analyses and proceed with the next phases of the project. The YVRAA Security Queuing Simulation, which was developed in ARENA 6.0, has become a valuable tool to visualize pre-board screening operations, identify bottlenecks and conduct "what-if" analyses. Simulation output statistics provide resource utilization, queue lengths and time spent in the system measurements. Using the YVRAA Security Queuing Simulation, the Airport Authority is able to anticipate the impact of a change in passenger numbers or staffing levels on waiting times. It showed that under some conditions it is more efficient to have two security lines fully
staffed than five partially staffed, and that increasing staffing levels could be more effective than acquiring additional expensive machinery. Most importantly, it showed what staffing levels and shift configurations were required to realize the goal of 90 percent of passengers waiting a maximum of 10 minutes in line. YVRAA's Vice President of Operations Craig Richmond noted that "the simulation model helps answer many other questions that may seem simple but can be extremely complex to answer." Some key observations included:

- The number of bags that must be searched has a significant impact on system throughput. Any actions that reduce the amount of carry-on baggage requiring search will contribute greatly to the efficiency of the pre-board screening system.
- The system throughput is more sensitive to changes in the fraction of bags requiring search than to changes in the number of passengers requiring secondary metal detection.
- It was more efficient to have security officers who identify an object at the x-ray machine search the baggage than to use separate x-ray and search personnel.
- In most cases it is better to allocate additional personnel as searchers as opposed to wanders.
- Additional screening officers should be allocated prior to the formation of queues. It was more difficult to reduce queues than avoid them.
- Simulation was useful for establishing service level criteria, and it was fundamental for estimating the number of staff required to achieve them.
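The core of a screening-line model like this is a first-come-first-served queue with several parallel screening positions. The sketch below is a stdlib stand-in for the ARENA model, with assumed arrival and service rates rather than YVR data; it returns each passenger's wait for a given number of open positions.

```python
import random

def simulate_line(arrivals, service_times, n_positions):
    """FCFS screening line: each passenger goes to the screening
    position that frees up earliest; returns per-passenger waits."""
    free_at = [0.0] * n_positions
    waits = []
    for t, s in zip(arrivals, service_times):
        i = free_at.index(min(free_at))   # earliest-available position
        start = max(t, free_at[i])
        waits.append(start - t)
        free_at[i] = start + s
    return waits

random.seed(42)
# assumed rates: ~4 passengers per minute arriving, ~1 minute of screening
arrivals, t = [], 0.0
for _ in range(500):
    t += random.expovariate(4.0)
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(500)]
```

Re-running simulate_line with different n_positions against the same arrival stream reproduces the kind of staffing comparison reported above: at this load, waits shrink markedly as positions are added, and the marginal benefit of each extra position diminishes.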

We determined the best allocation of staff to the search and wanding tasks by evaluating a range of allocation scenarios. We analyzed each PBS location separately because different passenger characteristics produced different searching and wanding ratios. The simulation model was run using a constant, saturated, unbroken stream of passengers so that system resources were working at full capacity. Throughput was calculated by dividing the number of passengers processed by the number of simulated hours. Results were summarized in tabular form and provided to Airport Authority management. In most cases, the marginal increase in throughput was greater when adding a wander than when adding an additional searcher.

Shift Scheduling

The second phase of the project sought to determine shift schedules to achieve the 90-10 service criterion with a minimum number of staff hours. Our approach combined a passenger load-forecasting model, simulation to determine staffing requirements across the four PBS lines and linear programming to determine an optimal allocation of shifts. Shift schedules were developed on a daily basis and allowed movement of staff between different PBS locations to take into account the different load patterns. To be useful to YVRAA analysts, our models had to be easy to understand, quick to execute and compatible with readily available analysis tools. Thus, we focused on integrating flight schedules, the simulation and the optimization model through Microsoft Excel.

Demand Generation

In forecasting passenger demand at each PBS location, the objective was to turn a daily flight schedule listing the departure time, gate number and number of seats for each aircraft, into the estimated number of passengers expected at each PBS location at each 10-minute interval throughout the day. Fortuitously, AIF sales data was available to validate our approach. The AIF is paid by departing passengers at collection points immediately preceding the PBS locations. Comparing this data to hand counts revealed that the AIF data was a good proxy for passenger arrival time data. We used the flight schedule to determine the capacity and departure time for each flight and estimated the number of passengers on a flight by multiplying its capacity by the anticipated load factor. We then allocated the estimated number of passengers to a time period prior to departure using a triangular distribution. We aggregated data across flights to determine the number of passengers arriving in each time slot. Using the AIF data for validation, different triangular distributions were evaluated until we found a best one for each PBS location. Passenger connection ratios were acquired to determine the approximate connection ratio to use with the passenger generator for the international pier.
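The allocation step described above can be sketched in a few lines. The helper below is hypothetical (the load factor, slot grid and flight data are assumptions for illustration): it spreads each flight's estimated passengers over pre-departure arrival times drawn from a triangular distribution and buckets them into 10-minute slots.

```python
import random

SLOT_MIN = 10  # width of a counting interval in minutes

def pbs_demand(flights, load_factor, tri, n_slots, seed=0):
    """flights: (departure_minute, seats) pairs; tri: (earliest, mode,
    latest) minutes before departure, e.g. (90, 40, 20) for domestic.
    Returns estimated PBS arrivals per 10-minute slot."""
    rng = random.Random(seed)
    earliest, mode, latest = tri
    counts = [0] * n_slots
    for dep, seats in flights:
        # estimated passengers on the flight = capacity x load factor
        for _ in range(round(seats * load_factor)):
            ahead = rng.triangular(latest, earliest, mode)  # low, high, mode
            slot = int((dep - ahead) // SLOT_MIN)
            if 0 <= slot < n_slots:
                counts[slot] += 1
    return counts
```

Summing the per-slot counts over all flights in a day's schedule gives the demand profile that was then validated against the AIF sales data.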


We were very pleased with these results; in general, simple triangular distributions produced a demand profile that consistently mirrored most of the peaks and valleys of the AIF data and was very close in magnitude at the key morning and evening departure times. We settled on separate distributions for flights to Canadian destinations and for flights to the United States and other international destinations. For flights to Canadian destinations, we used a 90-40-20 triangle, in which the first passenger arrived at the PBS 90 minutes before departure, the last passenger arrived 20 minutes before departure, and the most likely arrival time at the PBS was 40 minutes before departure. For the international and trans-border flights, a 150-80-20 triangle was selected, reflecting the earlier suggested check-in times for these flights.

Achieving the Service Criterion

The next step was to translate this demand into staffing requirements. We created a look-up table for each PBS location listing expected passenger demand for a 10-minute interval and the corresponding number of screening staff required to achieve the 90-10 service criterion. The simulation was used intensively at this point. For each demand rate, a staffing level was selected and the simulation was run at that constant staffing level and constant passenger demand rate. If the service criterion was met over the duration of the simulation, the staffing level was reduced; if not, it was increased. The simulation was rerun until the minimum staffing level that met the service criterion was determined. This procedure was repeated until the look-up table was populated. It became apparent that we needed to know how the staff were deployed between the wanding and bag-searching tasks, as well as how many equipment lines should be open and how the staff were distributed between them. Using the data in the throughput table, an "optimal staff allocation," defined as the allocation with the highest passenger throughput, was determined for each number of screening staff at each PBS location (see Figure 3).
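The search loop described above can be sketched as follows. Here `simulate` stands in for the queueing simulation and is assumed to return the fraction of passengers screened within the 10-minute target, so the names and the simple linear search are illustrative, not the project's code.

```python
def min_staff_for_criterion(demand_rate, simulate, target=0.90, max_staff=30):
    """Smallest staffing level whose simulated service level meets the
    90-10 criterion (at least 90 percent of passengers wait under
    10 minutes) at a constant demand rate.
    """
    for staff in range(1, max_staff + 1):
        if simulate(demand_rate, staff) >= target:
            return staff
    return None  # criterion unattainable within max_staff

def build_lookup_table(demand_rates, simulate):
    """Populate the demand-rate -> minimum-staff look-up table."""
    return {rate: min_staff_for_criterion(rate, simulate) for rate in demand_rates}
```

In practice each call to `simulate` would run the full discrete-event model; the look-up table is then reused by the scheduling stage without further simulation.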

Figure 3: A graphical representation of an optimal staff allocation scenario. Different colors represent the number of lines that should be open for each total staffing level and the numbers within the bars represent the allocation of searchers and wanders to each security line.
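One way to build the "optimal staff allocation" of Figure 3 is to enumerate line counts and wander/searcher splits for each head count. The even split across open lines and the `line_throughput(wanders, searchers)` signature below are simplifying assumptions for illustration, not the project's actual throughput model.

```python
def best_allocation(total_staff, line_throughput, max_lines=4):
    """Return (throughput, (lines, wanders, searchers)) maximizing total
    passenger throughput for a given head count, assuming every open line
    is staffed identically.
    """
    best = (0.0, None)
    for lines in range(1, max_lines + 1):
        staff_per_line, leftover = divmod(total_staff, lines)
        if leftover or staff_per_line < 2:
            continue  # need an even split with at least one wander and one searcher
        for wanders in range(1, staff_per_line):
            searchers = staff_per_line - wanders
            throughput = lines * line_throughput(wanders, searchers)
            if throughput > best[0]:
                best = (throughput, (lines, wanders, searchers))
    return best
```

Running this for every staffing level reproduces a table like the one Figure 3 depicts: the number of lines to open and the wander/searcher split at each total head count.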


Figure 4: Graphical representation of staff requirements of the optimal shift schedule for one scenario. The solid blue area represents the total staff requirements to achieve the target service level; the dashed line represents the total staff requirements plus the surplus factor. The top line shows the staffing levels required by the optimal shift schedule.

Developing the Shift Schedules

The final task was to determine a minimum-cost shift schedule that achieves the required staffing levels. We aggregated the staffing levels across the four locations to obtain an airport-wide staffing requirement. Given a list of the possible shifts, we used a linear programming model to determine the optimal shift schedule that satisfied the airport-wide staffing requirements in each time period. A surplus factor was added to account for staff breaks. YVRAA management wanted the flexibility to evaluate a wide range of shift combinations, including different durations and different start times. Our approach was sufficiently flexible to do this.

Conclusion

We believe that YVRAA was extremely far-sighted in recognizing the impact OR methods can achieve in improving security operations. This project gave students and faculty the opportunity to work on a challenging and significant applied project that set high standards for managing airport security operations. The newly formed Canadian Air Transport Security Authority, the agency now responsible for passenger screening at 89 Canadian airports, is considering COE proposals to update this study for the current pre-board screening configurations and to extend the analysis to designing, operating and staffing the hold baggage screening operations. For more on this and other applied projects, refer to http://www.coe.ubc.ca/. Additional information about Vancouver International Airport is available at http://www.yvr.ca/.

Mehmet Begen and Anita Parkinson received their M.Sc. degrees in Management Science through the COE program, and Bailey Kluczny received a B.Com degree from UBC Commerce. Begen is now a research analyst in the Centre for Operations Excellence, Parkinson is pursuing her Ph.D. at UBC, and Kluczny is considering graduate studies. Derek Atkins and Martin L. Puterman are professors in the Operations and Logistics Division of UBC Commerce. Puterman is founder and director of the COE, and Atkins is associate vice president of planning for UBC.

Knowledge Base

Fighting Flight Delays

Published by Alex Ross and Alison Swain on 30/05/2007

Operations research packs a punch while delivering a punctual operation for British Airways. By Alex Ross and Alison Swain

British Airways (BA) is a major international airline, operating 750 flights per day to 130 destinations worldwide, using a fleet of 230 aircraft. This large, complex network, coupled with a very complex, constrained operating environment, particularly at the BA main base at London Heathrow Airport (LHR), makes planning and delivering a punctual operation very challenging. Further factors such as asset utilization, commercial priorities and operational logistics clearly add to the challenge. Punctuality, however, is fundamental to the BA business. Industry surveys consistently identify departure punctuality as a key determinant of customer satisfaction, especially on shorter flights (see Figure 1). [The term "departure" relates to the aircraft wheels first moving at the start of a flight. Usually, this occurs when a tug pushes the aircraft backward from the jetty.] Furthermore, both the airspace and airport infrastructure are becoming ever more congested, and legislative penalties for operational disruption are on the increase.


Figure 1: Rating dissatisfaction. Each point on the graph represents a passenger satisfaction criterion such as check-in, on-board food, cabin crew, arrival baggage, etc., but labels have been removed for confidentiality reasons. The chart aims to illustrate the relative importance of punctuality.

For large, networked airlines like BA, the process of preparing an aircraft for departure is complex. It could be argued that most organizations face one of two basic kinds of operational complexity, namely:

Process complexity - complex delivery process but low frequency (e.g., launching a space rocket, building a stadium); or
Volume complexity - simple delivery process but high volume (e.g., FMCG assembly lines).

The operations function at a major airline, however, is faced with both kinds of complexity every day. Every flight departure relies on a range of resources integrating together at the correct time (see Figure 2) and, at BA, all of this must happen 750 times per day.

Figure 2: For every flight, a range of resources must integrate at the correct time.

Despite this complexity, the delivery of a reliable, punctual flying program is critical to the sustained success of an airline. To most businesses, the operating schedule delivers the end product (e.g., automobile assembly line) but, for an airline, the flying schedule is the end product. That is, the basic product BA sells to customers is a means of traveling from A to B at a specified time. Any meaningful improvement in punctuality could therefore provide an airline with a significant competitive advantage.

The Problem

Airlines generally publish punctuality data at the 15-minute level. These statistics show how often an airline's flights depart within 15 minutes of their scheduled time of departure (STD). In addition, many airlines, including BA, also measure punctuality at the zero-minutes level for internal purposes. This article considers the zero-minutes measure of punctuality. Against both measures, BA punctuality has, in general, been below corporate targets for some time and, despite significant management attention, the underlying trend has been downwards (see Figure 3).

Figure 3: Departure punctuality at British Airways. Note that the y-axis is omitted for confidentiality reasons. The graph aims to illustrate relative performance.

Put simply, the question posed to the operational research team by BA senior management was "Why?" The O.R. project objective was therefore to illustrate the nature of the punctuality issue in a manner that would focus directors' attention on key areas for improvement.

The O.R. Team

The Operational Research Group at BA is a well-established internal consultancy. Working directly for managers across the business, the O.R. remit is to challenge the way things are done and get involved in the most important decisions BA is making. O.R. provides three main services to BA:

problem structuring
business modeling
complex data analysis

Within these, the full range of hard and soft O.R. skills is practiced. The O.R. group comprises 50 to 60 people, divided into three teams:

Yield - supporting the Revenue Management Department
End-to-End - supporting the planning functions of BA (from fleet and network planning right through to operations control)
Projects - covering the remainder of the airline, with particular focus on products and brands, sales and marketing, and airports.

The group works as one unit, with many opportunities for cross-functional work. Regular movement across the teams is actively encouraged. Both authors are members of the End-to-End team.

The Approach

As the problem posed to O.R. was unstructured, our first task was to structure it sensibly and scope a manageable project. The approach taken followed several basic principles:


Learn from previous work. The challenges of explaining, measuring and, ultimately, improving flight punctuality have been the topic of considerable O.R. analysis over the years by academics, consultancies and, of course, the airlines themselves. In our opinion, much of this work has delivered limited success at best and BA is no exception. Various BA studies using approaches such as simulation, data mining and multiple regression have uncovered specific issues but never really aided understanding of the complete picture. For instance, several previous attempts to explain punctuality in a "bottom-up" manner using regression techniques to identify key punctuality predictor variables have been largely unsuccessful. Due mainly to considerable "noise" in the dependency network of activities for a flight departure (see Figure 4), it is very difficult to identify strong correlations.

Figure 4: A simplified dependency network for flight departure.

We concluded that the problem needed to be considered in a completely different way. Our work is therefore founded on the use of very simple "top-down" illustrations to identify the key punctuality influencers.

Avoid obvious implementation blockers. Traditionally, punctuality studies at BA have centered on the analysis of delay codes. BA assigns a code(s) to each delayed flight departure in order to explain the reason(s) for any delay (e.g., aircraft technical problem, late crew, delayed air traffic clearance, weather, etc.). These codes are assigned manually by flight dispatchers, hence are somewhat subjective. Code assignment is a contentious topic amongst the different operational departments. Although analysis of delay codes is a very simple task, it is often unsuitable for identifying the underlying root cause of the problem and inevitably faces resistance from clients at the project implementation stage. It was therefore imperative that this project avoid the use of delay codes completely from the outset. Instead, we used event-based process measures founded on clean, unambiguous ACARS (Aircraft Communication Addressing and Reporting System) data sourced directly from systems on-board the aircraft.

Scope a viable project. The entire BA operation is too large to tackle holistically, hence we considered the BA London Heathrow (LHR) short-haul (SH) operation initially. This comprises 2,800 flights per week to and from 45 United Kingdom domestic and European airports (see Figure 5).


Figure 5: British Airways' London Heathrow short-haul operation comprises 2,800 flights per week to and from 45 U.K. and European airports.

The LHR SH operation entails more flights than the LHR long-haul (LH) operation and the entire BA operation at London Gatwick airport (the second BA hub) combined. LHR SH performance therefore dominates BA overall network punctuality. As well as being the largest operation, LHR SH also has the lowest operational performance. Improving LHR SH punctuality is therefore fundamental to achieving BA corporate punctuality targets. The whole LHR SH operation is still too big and complex, however, hence we initially focused on the LHR SH first-wave. First-wave is defined, basically, as the first flight of the day on each aircraft. This means that first-wave comprises two distinct elements:

the set of inbound flights to LHR on aircraft that spend the night at U.K. regional or European airports, and
the set of outbound flights from LHR on aircraft that overnight in London.

This particular dataset was chosen because:

There is a very high correlation between first-wave punctuality and full-day punctuality. If BA has an unpunctual first-wave then there is little opportunity to "catch up," due mainly to high asset utilization; hence "knock-on" delays accumulate and the entire day is likely to be poor operationally. A relatively small increase in first-wave punctuality therefore delivers a disproportionately large benefit.
The first-wave data are the "cleanest" operational data available. As these are the first flights of the day on each aircraft, they are not influenced by knock-on impacts of earlier performance.
The first-wave chain of events (i.e., preparing and flying the first flight of the aircraft and then preparing it for the second departure) constitutes 150 flights per day, hence is significant to overall performance.

Focus on the correct metrics. As illustrated in Figure 4, a punctual departure is dependent on BA having the aircraft ready-to-go (RtG) on-time and also receiving pushback clearance from Air Traffic Control (ATC). An aircraft is considered RtG when the last aircraft door is closed prior to departure.


We focused on the RtG event rather than the departure event as it is obviously a better measure of BA internal performance and is very highly correlated with departure punctuality in any case. If an aircraft is RtG at three minutes before STD, then a punctual pushback is likely (see Figure 6).

Figure 6: Ready-to-go status is highly correlated with departure punctuality.

RtG performance can be viewed as being dependent on two key factors:

the extent to which delivery teams on the day of operation are provided with the opportunity to deliver a punctual RtG flight, and
the extent to which the delivery teams convert such opportunities into punctual RtG flights.

In view of this, we created two new, high-level metrics:

1. On-time achievables (OTA). This measures those occasions where there is sufficient aircraft ground-time available between its arrival and subsequent STD to execute all activities required to make it ready for departure. OTA is basically a measure of the effectiveness of strategic and tactical schedule planning processes.

2. OTA conversion rate (OCR). This measures how well the airport delivery teams actually convert these OTA opportunities into on-time RtG departures. OCR is basically a measure of delivery conformance.
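The two metrics are straightforward to compute from turn-level data. The sketch below assumes a minimal record per turn; the field names and data layout are illustrative, not BA's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One aircraft turn. Field names are illustrative, not BA's schema."""
    ground_time: int     # minutes between arrival and subsequent STD
    swt: int             # standard working time needed to turn this aircraft
    rtg_on_time: bool    # was the aircraft ready-to-go by STD?

def ota_and_ocr(turns):
    """Compute the two metrics over a list of turns.

    OTA: share of turns with enough ground time (>= SWT) to be turned on time.
    OCR: share of those achievable turns actually converted to on-time RtG.
    """
    achievable = [t for t in turns if t.ground_time >= t.swt]
    ota = len(achievable) / len(turns)
    ocr = sum(t.rtg_on_time for t in achievable) / len(achievable)
    return ota, ocr
```

Note that OCR is conditioned on achievability: a turn that never had enough ground time counts against OTA (a planning failure), not against OCR (a delivery failure), which is exactly the accountability split the two metrics were designed to provide.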

Identify a means of capturing directors' interest. Most directors simply want a concise explanation of what the problem is and where they need to invest in order to fix it. Given this, we noticed that a key element missing from previous BA punctuality studies was a simple, compelling illustration of where the punctuality issues lie and therefore where corporate focus is required. Creating such a picture therefore became our immediate goal, and we produced the "Waterfall View" to illustrate the first-wave chain of events and the influence that each element in the chain has on punctuality. The example Waterfall diagram (see Figure 7) illustrates the first-wave chain of events for aircraft flying from U.K. and European airports to LHR.


Figure 7: The Waterfall diagram, which shows the impact of performance on punctuality at key points, helped win over BA directors. Note: The diagram contains example data for illustration purposes only.

The Waterfall illustrates the impact of performance in each key area on overall punctuality:

1. 100 percent of aircraft are correctly positioned at U.K. and European airports at the beginning of the day.
2. These airports get 71 percent of aircraft RtG on time.
3. Due to subsequent ATC delays, 63 percent of flights push back on time. [Note: Due to the extremely high volume of early morning flights into LHR, it is not uncommon for ATC to hold SH flights on the ground at their departure airport.]
4. 65 percent arrive on time at LHR. This signifies that, for this example dataset, the actual flying times over the period were slightly shorter than the scheduled flying times.
5. Spare aircraft time planned into the schedule provides 83 percent of aircraft with at least the standard time required to turn at LHR, i.e., OTA=83 percent.
6. Although 83 percent of arrivals could theoretically have been converted into punctual RtG departures, LHR turnaround performance means that only 59 percent of flights are actually RtG on-time, i.e., OCR=72 percent.
7. After ATC delays, 54 percent of flights turning from the first-wave depart on time.

One interesting feature is the surprisingly low first-wave RtG performance (71 percent in this example). Theoretically, 100 percent RtG should be achievable on first-wave but, in reality, any issues regarding departure processes such as baggage loading or passenger boarding can quickly reduce this figure. Additionally, in the case of LHR, SH first-wave departures carry significant numbers of connecting passengers from early-morning long-haul LHR arrivals, hence this can create dependencies.
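Steps 5 and 6 of the Waterfall are linked by a simple ratio: OCR is the fraction of achievable turns actually converted, so it can be recovered from the fleet-wide shares. Computed from the rounded example shares (0.59/0.83) this gives roughly 71 percent, close to the quoted 72 percent, which is presumably based on unrounded counts.

```python
def ocr_from_waterfall(ota_share, rtg_share):
    """OCR = flights actually ready-to-go on time, as a fraction of the
    flights that had the opportunity (OTA). Both inputs are fleet-wide
    shares from the Waterfall, e.g. 0.83 and 0.59 in the example."""
    return rtg_share / ota_share
```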

The Results

The "Waterfall" proved to be a very successful means of engaging senior management, and the insight gained from the Waterfall work has been the catalyst for a range of high-profile, cross-functional initiatives. A few examples are:

First-wave directive. A corporate directive on first-wave has been initiated and a large communications exercise undertaken to inform front-line staff of the huge importance of a punctual first-wave. New first-wave processes and measures have been implemented, and first-wave performance is now examined daily.

As illustrated in Figure 4, a large number of individual activities are involved in the BA aircraft turnaround process. Coupled with this, LHR OCR is below target for first-wave turns. An initiative is therefore underway to identify potential opportunities to simplify first-wave turns.

Improved contingency profiling and utilization of aircraft spare time. Any time that an aircraft is not flying is very expensive, so it is important that any ground-time is exploited fully. At BA, the term "standard working time" (SWT) denotes the agreed minimum time required to turn an aircraft between flights. Put another way, the SWT represents the planned critical path through the network of activities for an aircraft turn. An aircraft will therefore never be scheduled with a ground-time less than SWT, although sub-SWT ground-times can obviously occur on the day of operation due to inbound aircraft delays. Our analysis highlighted two key points, as illustrated in Figure 8:

OCR "peaks" when the available turn time on the day of operation is approximately 15 minutes longer than SWT but then plateaus; hence punctuality will not improve significantly by increasing available turn time beyond this point. This is due partly to LHR departure processes being planned as almost just-in-time, and an extensive program of process work is being undertaken in this area as BA prepares to move to a new home at the brand new LHR Terminal 5.
There are a significant number of very long turns, and this expensive, spare aircraft time is largely wasted due to the flat OCR. O.R. work identified that significant improvements in the distribution of aircraft ground-time are achievable and deliver a much more operationally robust schedule. OTA improvement of 3 percent to 5 percent is now being delivered with negligible compromise to the commercial value of the flying schedule.
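The plateau analysis amounts to bucketing turns by their slack over SWT and computing the OCR within each bucket. The tuple layout and five-minute bucket width below are illustrative assumptions.

```python
from collections import defaultdict

def ocr_by_slack(turns, bucket=5):
    """OCR per (available turn time - SWT) bucket, in minutes.

    `turns` is an iterable of (available_minutes, swt_minutes, rtg_on_time)
    tuples; only turns with non-negative slack count as achievable.
    """
    agg = defaultdict(lambda: [0, 0])  # bucket -> [converted, achievable]
    for available, swt, on_time in turns:
        slack = available - swt
        if slack < 0:
            continue  # not an on-time achievable turn
        b = (slack // bucket) * bucket
        agg[b][0] += int(on_time)
        agg[b][1] += 1
    return {b: done / total for b, (done, total) in sorted(agg.items())}
```

Plotting the resulting conversion rates against slack reproduces the shape of Figure 8: OCR rises with slack, then flattens once the buffer exceeds roughly 15 minutes.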

Figure 8: Analysis showed that OCR "peaks" and then plateaus when available turn time is 15 minutes longer than SWT, and there are a significant number of long, costly "turns."

Reduced crew changes. A seemingly obvious way to tackle low OCR is to simplify the turnaround process. As mentioned above, BA, like other airlines, invests heavily in improving the effectiveness of ground processes. One complexity of a BA turn, however, is that it often involves a crew change. This means that the pilots who have flown an aircraft to LHR then leave it and join another aircraft, while new pilots join this aircraft, often from a different inbound flight. This complexity introduces additional potential failure points and also means the turn cannot be managed by a single team, hence associated team-working benefits are lost. O.R. analysis identified reduced crew changes as a potential quick win for punctuality. Initial analysis indicated that departures following a turn involving a crew change are, on average, 7 percent less punctual than departures where crew remain on board the same aircraft. Working with the crew planning team at BA, O.R. practically halved the number of planned crew changes on the crucial early-morning part of the schedule at no additional crew cost. Crew complaints also decreased significantly as a result of the changes.

Operational metrics. The OTA and OCR measures have been embedded as key operational metrics at BA. They promote more clarity and accountability than previous metrics and are now an intrinsic part of daily performance monitoring. Furthermore, the "achievable/achieved" principle has been adopted in various areas across the airline. For instance, the effectiveness of the passenger boarding process is now being measured in terms of opportunities to board on time and the associated conversion rate.

The Waterfall picture. The Waterfall view itself has been accepted across the business as a clear, effective, high-level view of the punctuality drivers. A flexible Waterfall tool is now established on the BA Intranet, offering the capability to view punctuality performance of various cuts of the operation in Waterfall format. The Waterfall has also been extended beyond its original scope. It now illustrates the full operational day rather than just the first wave and has been adopted as a tool for illustrating the performance of lower-level processes. One very high-profile application of the Waterfall has been to inform the BA leadership team of key differences between LHR and LGW performance. It brought simple, factual evidence to the table and helped explode several long-standing myths.

Conclusions

Unfortunately, many insightful, innovative O.R. studies falter at the client implementation stage. Our reflections are therefore mainly around why we think this particular project enjoyed very successful implementation and our conclusions are very simple:

Regardless of how complex the problem is, find a very simple means of capturing it in order to get senior clients interested. There should be one simple, compelling picture that everyone will remember. Get a critical mass of senior clients "hooked" and the publicity generated will bring others along.
Be prepared to invest in "selling" the project. The "downside" of a high-profile project is that everyone wants to know about it; the most surprising aspect of this project for us was the O.R. man-hours we ended up spending on communications and client engagement. On reflection, although this is not the most interesting aspect of O.R. work and it sometimes felt like poor use of O.R. time, it was time very well spent indeed.
Patience and timing are everything. Especially in large organizations, it can take an extraordinary length of time for new ideas to filter through and become embedded. It took a long time for momentum to build behind our project and even now, one year on, it is still really encouraging just to overhear phrases like "conversion rate" while waiting in the coffee queue!

Alex Ross (alex.ross@ba.com) is a specialist O.R. consultant, British Airways O.R. Team (End-to-end Scheduling), with overall responsibility for all O.R. and process design work on punctuality. Alison Swain is a senior O.R. consultant with the O.R. Team who specializes in schedule design.


