When people think of computers, they usually think of
silicon chips and circuit boards. Moving from relays to
vacuum tubes to transistors to integrated circuits has
vastly increased the power and speed of computers, but
the essential idea behind the work computers do remains
the algorithm.
An algorithm is a reliable, definable proce-
dure for solving a problem. The idea of the algorithm goes
back to the beginnings of mathematics, and elementary
school students are usually taught a variety of algorithms.
For example, the procedure for long division by succes-
sive division, subtraction, and attaching the next digit is
an algorithm.
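That schoolbook procedure can be written out directly in code. The following Python sketch (the function name is ours) performs long division digit by digit, carrying the remainder forward at each step:

```python
def long_divide(dividend, divisor):
    """Schoolbook long division: bring down each digit of the
    dividend, divide, record the quotient digit, and carry the
    remainder into the next step."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # attach the next digit
        quotient_digits.append(str(remainder // divisor))
        remainder %= divisor                      # what remains after subtracting
    return int("".join(quotient_digits)), remainder
```

Because the procedure is definite and mechanical, it always agrees with the machine's built-in integer division.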
Since a bona fide algorithm is guaranteed to
work given the specified type of data and the rote following
of a series of steps, the algorithmic approach is naturally
suited to mechanical computation.
Algorithms in Computer Science
Just as a cook learns both general techniques such as how
to sauté or how to reduce a sauce and a repertoire of specific
recipes, a student of computer science learns both general
problem-solving principles and the details of common algo-
rithms. These include a variety of algorithms for organizing
data (see sorting and searching), for numeric problems
(such as generating random numbers or finding primes),
and for the manipulation of data structures (see list pro-
cessing and queue).
A working programmer faced with a new task first tries
to think of familiar algorithms that might be applicable to
the current problem, perhaps with some adaptation. For
example, since a variety of well-tested and well-understood
sorting algorithms have been developed, a programmer is
likely to apply an existing algorithm to a sorting problem
rather than attempt to come up with something entirely
new. Indeed, for most widely used programming languages
there are packages of modules or procedures that imple-
ment commonly needed data structures and algorithms.
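Python's standard library is one example: sorting, binary search, priority queues, and double-ended queues all come ready-made, so the programmer rarely writes them from scratch:

```python
import bisect
import heapq
from collections import deque

data = [5, 2, 9, 1]

# A tuned, well-tested sorting routine rather than a hand-rolled one.
assert sorted(data) == [1, 2, 5, 9]

# Binary search over already-sorted data.
assert bisect.bisect_left([1, 2, 5, 9], 5) == 2

# Ready-made structures: a priority queue and a double-ended queue.
heap = data[:]
heapq.heapify(heap)
assert heapq.heappop(heap) == 1   # smallest item comes out first
queue = deque(data)
queue.appendleft(0)
assert queue[0] == 0
```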
If a problem requires the development of a new algo-
rithm, the designer will first attempt to determine whether
the problem can, at least in theory, be solved. Some kinds of problems have
been shown to have no guaranteed answer. If a new algorithm
seems feasible, the designer can apply general principles of
algorithm design, such as breaking the problem down into
component parts or building up from the simplest case, to
generate a solution. For example, the merge-sort algorithm
divides the data to be sorted into successively smaller portions
until each is trivially sorted, and then merges the sorted
portions back together.
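A minimal Python sketch of merge sort shows this divide-and-merge structure:

```python
def merge_sort(items):
    """Sort by dividing the list in half, sorting each half
    recursively, and merging the two sorted halves."""
    if len(items) <= 1:                 # a list of 0 or 1 items is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0             # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```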
Another important aspect of algorithm design is choosing
an appropriate way to organize the data. For example, a sorting
algorithm that uses a branching (tree) structure would probably use a data structure that
implements the nodes of a tree and the operations for adding,
deleting, or moving them.
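Such a structure might look like the following Python sketch (the names are illustrative): a node class plus operations for adding nodes and walking the tree:

```python
class TreeNode:
    """One node of a binary search tree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Add a key, descending left or right until a free spot is found."""
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def in_order(root):
    """Visit nodes left to right, yielding the keys in sorted order."""
    if root is None:
        return []
    return in_order(root.left) + [root.key] + in_order(root.right)
```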
Once the new algorithm has been outlined, it is often desirable
to demonstrate that it will work for any suitable data.
Mathematical techniques such as the finding and proving of
loop invariants (assertions that remain true before and after
each iteration of a loop, and thus still hold when it terminates)
can be used to demonstrate the correctness of the
implementation of the algorithm.
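The idea can be made concrete by asserting the invariant inside the loop itself, as in this Python sketch of a running sum:

```python
def running_sum(items):
    """Sum a list while checking the loop invariant: after i items
    have been processed, `total` equals the sum of items[:i]."""
    total = 0
    for i, value in enumerate(items):
        assert total == sum(items[:i])   # invariant before each iteration
        total += value
    assert total == sum(items)           # invariant still holds at termination
    return total
```

In practice the assertions are a development aid; the proof itself is carried out mathematically, but the asserted condition is exactly the loop invariant.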
Practical Considerations
It is not enough that an algorithm be reliable and correct;
it must also be accurate and efficient enough for its
intended use. A numerical algorithm that accumulates too
much error through rounding or truncation of intermediate
results may not be accurate enough for a scientific applica-
tion.
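Floating-point arithmetic illustrates the problem even in tiny cases: below, a naive sum of ten 0.1s drifts away from 1.0, while Python's error-compensated `math.fsum` does not:

```python
import math

# Each binary floating-point 0.1 is slightly larger than one tenth,
# and a naive loop accumulates that representation error.
naive = 0.0
for _ in range(10):
    naive += 0.1

compensated = math.fsum([0.1] * 10)  # compensated summation

assert naive != 1.0        # rounding error has accumulated
assert compensated == 1.0  # the compensated sum rounds correctly
```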
An algorithm that works by successive approximation
or convergence on an answer may require too many itera-
tions even for today’s fast computers, or may consume too
much of other computing resources such as memory. On
the other hand, as computers become more and more pow-
erful and processors are combined to create more power-
ful supercomputers, algorithms that were previously consid-
ered impracticable might be reconsidered. Code profiling
(analysis of which program statements are being executed
the most frequently) and techniques for creating more effi-
cient code can help in some cases. It is also necessary to
keep in mind special cases where an otherwise efficient
algorithm becomes much less efficient (for example, a tree
sort may work well for random data but will become badly
unbalanced and slow when dealing with data that is already
sorted or mostly sorted).
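The tree-sort case can be demonstrated directly: inserting already-sorted keys into a simple unbalanced binary search tree produces a degenerate chain as deep as the input, while random keys stay shallow. The Python sketch below uses illustrative names:

```python
import random

class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def tree_depth(root):
    if root is None:
        return 0
    return 1 + max(tree_depth(root.left), tree_depth(root.right))

keys = list(range(200))
sorted_tree = None
for k in keys:                      # already-sorted input: worst case
    sorted_tree = bst_insert(sorted_tree, k)

shuffled = keys[:]
random.Random(0).shuffle(shuffled)
random_tree = None
for k in shuffled:                  # random input: stays shallow
    random_tree = bst_insert(random_tree, k)

assert tree_depth(sorted_tree) == 200   # a linked list in disguise
assert tree_depth(random_tree) < 50     # roughly logarithmic on average
```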
Sometimes an exact solution cannot be mathematically
guaranteed or would take too much time and resources to
calculate, but an approximate solution is acceptable. A so-
called “greedy algorithm” builds a solution in stages, at each
stage making the choice that looks best at the moment and
testing whether the result is “good enough.” Another
approach is to use an algorithm that can produce a rea-
sonable if not optimal solution. For example, if a group of
tasks must be apportioned among several people (or com-
puters) so that all tasks are completed in the shortest pos-
sible time, the time needed to find an exact solution rises
exponentially with the number of workers and tasks. But
an algorithm that first sorts the tasks by decreasing length
and then distributes them among the workers by “dealing”
them one at a time like cards at a bridge table will, as dem-
onstrated by Ron Graham, give an allocation guaranteed to
be within a factor of 4/3 of the optimal result—quite suitable for most
applications. (A procedure that can produce a practical,
though not perfect, solution is actually not an algorithm but
a heuristic.)
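This task-scheduling heuristic (Graham's “longest processing time” rule) is short enough to sketch in Python; here a heap tracks the least-loaded worker, and the function name is ours:

```python
import heapq

def lpt_makespan(task_lengths, workers):
    """Sort tasks by decreasing length, then deal each one to the
    currently least-loaded worker; return the overall finishing
    time (the makespan)."""
    loads = [0] * workers            # current total load per worker
    heapq.heapify(loads)
    for length in sorted(task_lengths, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + length)
    return max(loads)
```

For example, five tasks of lengths 3, 3, 2, 2, 2 shared by two workers yield a makespan of 7 versus an optimal 6, comfortably within the guaranteed 4/3 factor.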
An interesting approach to optimizing the solution to
a problem is allowing a number of separate programs to
“compete,” with those showing the best performance sur-
viving and exchanging pieces of code (“genetic material”)
with other successful programs.
This of course mimics evolution by natural selection in the
biological world.
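A toy version of this evolutionary approach can be sketched in Python. Here individuals are bit strings scored by how many 1s they contain; the fitness function, parameters, and names are all illustrative choices rather than a standard:

```python
import random

def evolve(length=20, population=30, generations=60, seed=1):
    """Toy genetic algorithm: the fittest half survives each
    generation, exchanges 'genetic material' by crossover, and
    occasionally mutates."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)          # best performers first
        survivors = pop[:population // 2]
        children = []
        while len(survivors) + len(children) < population:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)       # crossover point
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:               # occasional mutation
                i = rng.randrange(length)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(sum(individual) for individual in pop)
```

Because the best individual always survives into the next generation, the top fitness can only improve as the generations pass.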