
Software is Mostly Unnecessary

Software is estimated to comprise 10% of our present-day economy, but its effects reach much further. It has become so ingrained that our civilization would function less efficiently without it. In the tough times to come, this modern pillar of commerce will not survive in its present form. Like the recent financial bubbles, the software bubble is bloated, costly, and prone to faults. Software is evidently not well understood; otherwise its problems would have been settled during its six decades of existence. I have plumbed its foundations and offer a means of control that is far less esoteric and greatly superior.

Overall software success statistics are not those typical of an engineering discipline, but more akin to those of some branch of guesswork. The Standish Chaos Report [a] shows a (currently) worsening trend in software projects’ success rates:

32% Successful (on time, on budget, fully functional)
44% Challenged (late, over budget, and/or less than the promised functionality)
24% Failed (canceled or never used)

a. http://www.galorath.com/wp/2009-standish-chaos-report-software-going-downhill.php

Other reports and articles, referenced below, indicate software production efficiency is
only about 50% for the successful 32% of projects and less for the “challenged.” The
“failed” category gets a zero, of course. In sum, there is definitely something lacking in
software “engineering.”

It has been said that “People are hungry for what works.” So why would “people” put up with something that has to be coaxed, kicked, and fought every step of the way, even after the ultimate customer receives it? “Cheerleading?” The perception, even among software creators, must be that there is no alternative. The disclaimers shipped with virtually all software packages relieve the makers of most responsibility, leaving those knotty details to the user.

It would seem the technology being used (Turing-type machines, shared-resource architecture, and software) is inappropriate and inadequate for the greater percentage (the 68% less than successful) of the projects being pursued. The evidence is that few know how to produce reasonably priced, faultless, complex system software. It may be beyond human ability to do so in a routine fashion, and there are good reasons why. The primary causes of software’s inappropriateness and inadequacy are the impediments inherent in its technology and practice. (I identify and discuss these ten foundational impediments in a separate paper.) No matter: I offer an alternative system-control means, the rationale for which is described below.

“Programming is the only engineering profession that spends half its time and effort on detecting and correcting mistakes made in the other half of the time,” wrote Bernhard Beckert in IEEE Intelligent Systems, an IEEE Computer Society magazine, in 2006. By Beckert’s reckoning, the production of software is thus only 50% efficient, levying on US manufacturers (and consumers) an extra $80 billion or so a year plus an unknown amount of collateral damage.1, 2, 3 The rest of the world suffers commensurately.
There is no guarantee that computer products delivered to government, military, industry, business, or private citizens are free of software bugs. Why would we keep pouring resources into such an ineffective and problem-ridden enterprise, unless we have come to believe there is no alternative? Surely if there were a reasonable and more straightforward approach, we would exercise it. That would be good business. For now, it is software on top of hardware, or nothing. Instead of fixing the bugs that continually occur, even after six decades of experience, why don’t we find ways to reduce our dependency on software?

The subject of software brings to mind the embedded control systems upon which most
of the world’s software runs. Man-years of engineering labor go into the design and
manufacture of those control-electronics (computers, microprocessors, and
microcontrollers) that interact with us and directly affect our lives and well-being many
times a day. But it is not enough to expend man-years creating the electronic hardware.
Similar efforts are required to design and produce software for those same control
systems. Software specification, verification, integration with hardware, and debugging
continue to be problematic. Although seemingly vital, since it is used in many products and in their manufacturing processes, software is mostly unnecessary, as well as a detrimental complication for defense4 and for control-system safety.

Hardware is essential. It performs all the logic and arithmetic operations, and the
reception, decoding, and storage of sensory information and data. Every effect created by
a hardware-software combination is initiated in and executed by hardware, not software.
Hardware even houses (control store) and paces (instruction counter) the software
instructions. Hardware is indispensable. Controllers can’t work without it. It is not the
case that hardware is dependent upon software for functionality. It is the other way
around. Software depends upon hardware to code it, house it, access it, step through it,
and implement it. That turns out to be a benefit, because hardware can be tested with
finite resources, while software testing may never end. “Complete testing of a moderately
complex software module is infeasible. Defect-free software product can not be
assured.”2 “Software essentially requires infinite testing, whereas hardware can usually be
tested exhaustively.”3
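The testability contrast can be made concrete. A combinational circuit with n inputs has exactly 2^n input combinations, so its entire behavior can be checked. The sketch below (illustrative Python, not drawn from any of the cited sources; the example circuit is invented) exhaustively verifies a three-input gate network against an independent statement of its intended function:

```python
from itertools import product

def circuit(a: int, b: int, c: int) -> int:
    """An invented three-gate network: (a AND b) OR (NOT c)."""
    return (a & b) | (1 - c)

def spec(a: int, b: int, c: int) -> int:
    """Independent statement of the intended function."""
    return 1 if (a and b) or not c else 0

def verify() -> bool:
    # 2**3 = 8 cases close the test space completely.
    return all(circuit(a, b, c) == spec(a, b, c)
               for a, b, c in product((0, 1), repeat=3))
```

Eight cases exhaust the circuit’s behavior. A software module’s state space, by contrast, grows with every variable and branch, which is the sense in which its testing may never end.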

The problems in software multiply with complexity, which climbs ever higher. Is software really necessary? If systems relied more on hardware and less on software, some of our reliability problems would simply vanish, and time- and safety-critical applications would improve. Wouldn’t it be a good thing to surely and systematically decrease the use of, and dependence upon, software in favor of the more reliable and testable hardware?

What is software, anyway? We can find that software selects, sequences, and times the
hardware functions. But software itself is sequenced and timed by hardware. Software, in
the final analysis, can only tell the hardware what to do, in what order and, in some cases,
how long to do the activity. Software provides direction to the hardware, but hardware
actually performs all the functions. Software can only tell the hardware what it is time to
do:

001010 (now it is time to) Do X,
001012 (now it is time to) Do Y,
001014 (now it is time to) Do Z,
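That relationship can be sketched as a toy fetch-and-dispatch loop (illustrative Python; the operations and names are invented for the example). The “software” is nothing more than an ordered list of directives; the stepping, decoding, and doing are all performed by the loop, which stands in for the hardware’s instruction counter and decoder:

```python
# Toy illustration: the "software" is only an ordered list of directives;
# the run() loop plays the role of the hardware's instruction counter and
# decoder, which actually step through and perform each action.
def do_x(state): state.append("X done")
def do_y(state): state.append("Y done")
def do_z(state): state.append("Z done")

program = [do_x, do_y, do_z]    # the software: when to do what

def run(program):
    state = []
    counter = 0                  # hardware: instruction counter
    while counter < len(program):
        program[counter](state)  # hardware: decode and execute
        counter += 1
    return state
```

Remove the list and nothing happens; remove the loop and the list is inert. The directives direct, but only the loop acts.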

Hardware is the necessary and more robust physical layer that constitutes—and connects
—subsystems. Software consists of fragile over-layers of human-composed command
that generate their own problems. If the hardware for control systems were smarter in matters temporal, it would need less software, or perhaps none at all. If
the hardware could be designed and configured to know when to do the functions and
operations it already knows how to do, it would not need software to tell it when to do
them. Shifting software functions to hardware would be key in reducing software
dependency. Hardware is faster, surer, and more reliable than software. If the hardware
did not need software, it could be more autonomous and simpler. Controller design
efforts would be more focused and efficient as well.

If we could decrease the necessity and use of control-system software by a factor of, say, five, complexity due to software might be reduced even more on a percentage basis. In
that case, problems attributable to software could decline by a factor of ten or more. The
controls-software industry could also shrink by a similar factor. The ensuing
consolidation would greatly benefit the providers and end-users during its growth phase,
as has been shown by other innovative technologies.

A better practical logic, Natural Machine Logic5 (NML), can make controls-hardware
intelligent enough to displace run-time software. It would not be a one-for-one
replacement. NML is not “better” software. It is not “hard” software. We would not
merely substitute hardware elements for software elements.

Some functions and operations that can be done simply in time-domain temporal-logic hardware, rather than imperfectly in space-domain temporal-logic software, are: persistence, concurrency, precedence and succession, and repetition. These operations and others are primitives in NML, and all work consistently and compatibly in the time domain with the existing space-domain primitives of conjunction and negation, and their combinations.
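To illustrate, two of the named time-domain operations can be given plausible readings over boolean signals sampled at discrete instants (illustrative Python; these semantics are my own rendering for the sake of example, not the NML definitions, which appear only in the unpublished manuscript5):

```python
# Plausible illustrative readings of two temporal operations over boolean
# signals sampled at discrete instants. NOT the NML definitions.

def persistence(signal, n):
    """True at instant t if the signal has held true for the last n samples."""
    return [t + 1 >= n and all(signal[t - n + 1:t + 1])
            for t in range(len(signal))]

def precedes(a, b):
    """True if the first rising edge of a occurs before the first of b."""
    first = lambda s: next((t for t, v in enumerate(s) if v), None)
    fa, fb = first(a), first(b)
    return fa is not None and (fb is None or fa < fb)
```

In hardware, such operations correspond to small arrangements of latches and counters; in software they must be reconstructed from polling loops and timers, which is where the imperfection enters.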

NML allows designers to envision and create smarter, leaner, and safer all-hardware
solutions in place of hardware-software solutions, definitely a paradigm shift. Supporting
this major change, design environments would be established to promote and encourage
more sophisticated and specific, but flexible, hardware systems that use minimal software
instead of general-purpose hardware guided by tailor-made software, as is now the case.
NML also fosters local control over central control. Autonomous control nodes would be
located near the action and closely coupled to the sensors and actuators. All necessary
temporal intelligence would be built into the hardware, instead of “bolted-on-top” as it is
now, in software. Continuous self-testing and exception reporting can be integral to the
NML specification and to the finished product.

An Example

NML systems that monitor and control machine behavior in a responsive way need not be
restricted to appliances, automobile subsystems, or factory automation. Higher machine-
intelligence is indeed attainable with NML. Suppose we sought a definitive answer to an
example of a frame problem6, the Yale Shooting Problem7 (YSP).

A strong specification is the most important basis upon which to develop any solution.
Using the temporally advanced and near-natural NML language, we would specify the
behavior of a machine or control system that would resolve the Yale Shooting situation
amongst all of its probable variations. NML defines an expanded group of logic operators
having corresponding logic elements and a novel methodology. We would make use of
every one of those faculties. Once the specification covered all the desired contingencies,
the task would be near complete. The NML specification determines the unique logic
operators, logic elements and their interconnections, and the machine architecture—a
schematic, of sorts. The specification thus directs the hardware logic configuration. The
resulting logic element arrangement will monitor and control the behavior specified and
its own functions in order to provide the desired solutions and verified output values (or
answers). NML “source code” for this YSP consists of a single continuous line of 66
symbols, which is the common-sense process-description and is human-readable.8
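For readers unfamiliar with the scenario, the intended common-sense outcome can be stated as a tiny event-sequence evaluator (illustrative Python; this is not the 66-symbol NML description, only the standard reading of the problem): loading makes the gun loaded, waiting changes nothing (fluents persist by inertia), and shooting a loaded gun is fatal to Fred:

```python
# Toy evaluator for the intended common-sense reading of the Yale
# Shooting scenario. An illustration only, not the NML
# process-description referenced in the text.
def yale(events):
    loaded, alive = False, True
    for event in events:
        if event == "load":
            loaded = True
        elif event == "shoot":
            if loaded:
                alive = False
            loaded = False   # firing empties the gun
        # "wait" changes nothing: fluents persist by inertia
    return alive

# The canonical sequence load, wait, shoot: Fred does not survive.
# yale(["load", "wait", "shoot"]) evaluates to False
```

The difficulty the frame problem poses for formal systems is precisely the innocuous-looking comment above: justifying that “wait” leaves the gun loaded, rather than allowing it to become mysteriously unloaded.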

The NML controller solution demonstrated the correct performance of the Yale Shooting
evaluation described above on seven independent input activities, providing three resultant
outputs (including whether Fred lives or dies), in an all-hardware arrangement using fewer
than 30 logic elements or gates in an FPGA (field-programmable gate array). The typical
microprocessor-based solution would use a device of 10,000 gates together with hundreds
of lines of BIOS and application software. The software version would not be able to give
an indisputably valid decision for all of the probable event-sequences, whereas the NML
hardware solution answers them all correctly and immediately. Each NML controller
configuration is specific for the given process to be controlled and monitored.

NML is the new technology that will enable the controls industry to wean itself from
software dependency and gain orders of magnitude in production efficiency and safety. It
needs help in placement, marketing, and resources to give it a start. There are obvious
primary targets for NML introduction such as the embedded controller market, now
employing 98% of the universe of microprocessors and derivatives. Once proven in those
markets, NML will begin to displace code in other software strongholds.

References:
1. “Intelligent Systems and Formal Methods in Software Engineering” by Bernhard Beckert, in IEEE Intelligent Systems, November/December 2006
2. “Software Reliability” by Jiantao Pan, Carnegie Mellon University, 1999
http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/
3. “Overview of Software Reliability,” NASA Software Assurance
http://sw-assurance.gsfc.nasa.gov/disciplines/reliability/index.php
4. “Software Aspects of Strategic Defense Systems” by David Lorge Parnas, 1985
http://www.klabs.org/richcontent/software_content/papers/parnas_acm_85.pdf
5. “Natural Machine Logic” by C. Moeller, an unpublished manuscript
6. Frame Problem: http://www-personal.umich.edu/~lormand/phil/cogsci/frame.htm
7. “Synthesis of a Solution to the Yale Shooting Problem via Natural Logic With Implications for a General Solution to the Frame Problem,” in Workshop Handouts, The 10th International Workshop on Logic & Synthesis, 12-15 June 2001, Granlibakken, California.
