Online Library of Liberty
A project of Liberty Fund, Inc.
Literature of Liberty: A Review of Contemporary Liberal Thought was published first by the Cato Institute (1978-1979) and later by the Institute for Humane Studies (1980-1982) under the editorial direction of Leonard P. Liggio. It consisted of lengthy bibliographical essays, editorials, and many shorter reviews of books and journal articles. There were 5 volumes and 20 issues. This issue contains a lengthy bibliographical essay by John Hospers on “The Literature of Ethics in the Twentieth Century.”
This work is copyrighted by the Institute for Humane Studies, George Mason University, Fairfax, Virginia, and is put online with their permission.
This material is put online to further the educational goals of Liberty Fund, Inc. Unless otherwise stated in the Copyright Information section above, this material may be used freely for educational and academic purposes. It may not be used in any way for profit.
Leonard P. Liggio
John V. Cody
John E. Bailey, III
University of Missouri
Western Maryland College
M. E. Bradford
University of Dallas
New Mexico State University
Douglas Den Uyl
Edward C. Facey
John N. Gray
Jesus College, Oxford University
M. E. Grenander
SUNY at Albany
University College, Cork, Ireland
University of California, Los Angeles
Reason Foundation, Santa Barbara
Florida Atlantic University
New York University
New York Council for the Humanities
University College, Cork, Ireland
University of Northern Kentucky
Joseph R. Peden
Baruch College, City University of New York
John T. Sanders
Rochester Institute of Technology
University of Minnesota
University of Newcastle
New South Wales
Harvard Law School
University of Florida
Rochester Institute of Technology
George Mason University
Trenton State College
Santa Barbara, California
Lon L. Fuller's contributions have been a major milestone in modern moral and legal philosophy. Fuller's approach to legal and moral philosophy seemed to parallel the concepts which the Pavia University legal theorist Bruno Leoni expressed in his Freedom and the Law (Los Angeles, 1972). Both Fuller and Leoni emphasized that legislation is not to be identified with true law; both also searched for fundamental legal principles as exemplified in such expressions of law as common law, custom, private arbitration, and spontaneous adjustments among private individuals. Recently, F. A. Hayek has attempted to focus attention on the opposition between law and government statutes.
Fuller rejected coercion and hierarchies of command as identifying characteristics of law (see Fuller's The Morality of Law, 1964). Here also Fuller's attitude appears compatible with Hayek's concept of spontaneous order. Social activities and relations, including economic activities or exchanges, are legitimate and workable only in the context of personal freedom of action and choice. Hayek has emphasized the coordination role performed by individual judgments operating in freedom from statute law. Fuller likewise identified a coordination role as central to human action, and he regarded the process of discovering legal principles expressed in common law, custom, and the like as the natural law tradition. He identified with this system of legal thinking because it exemplified man's purposive and aspirational nature, emphasized the role of human reason, and opposed arbitrariness in the governing of men. The role of command is excluded from the natural law tradition.
Hayek and Leoni have underscored the analogy between legislation and money. The denationalization or depoliticization of money and of law are seen as comparable processes in harmony with the natural order. The causes of the inflation of money and the inflation of legislation, with the good being depreciated and driven from use by the bad, are envisioned as similar in concept and in practice. The intrusion of government, whether into the natural monetary process or the natural legal process, creates disorder and depreciates the value of both money and law. Sound money and sound law, resembling sound science and technology, must be based upon a process of discovery and choice and not upon coercion, which is the basis of legislation. In a passage reminiscent of Cicero's stress on the value of the slow, organic growth of Roman law, Leoni notes:
Legislation appears today to be a quick, rational, and far-reaching remedy against every kind of evil or inconvenience, as compared with, say, judicial decisions, the settlement of disputes by private arbiters, conventions, customs, and similar kinds of spontaneous adjustments on the part of individuals. A fact that almost always goes unnoticed is that a remedy by way of legislation may be too quick to be efficacious, too unpredictably far-reaching to be wholly beneficial, and too directly connected with the contingent views and interests of a handful of people (the legislators), whoever they may be, to be, in fact, a remedy for all concerned. Even when all this is noticed, the criticism is usually directed against particular statutes rather than against legislation as such, and a new remedy is always looked for in “better” statutes instead of in something altogether different from legislation.
Eric A. Havelock, in The Liberal Temper in Greek Politics (1957) indicates the Hellenistic origins of the concept of natural law and of its expression in the idea of liberty as freedom from coercion by other men. Leoni has commented:
The paradoxical situation of our times is that we are governed by men, not, as the classical Aristotelian theory would contend, because we are not governed by laws, but because we are. In this situation it would be of very little use to invoke the law against such men. Machiavelli himself would not have been able to contrive a more ingenious device to dignify the will of a tyrant who pretends to be a simple official acting within the framework of a perfectly legal system. If one values individual freedom of action and decision, one cannot avoid the conclusion that there must be something wrong with the whole system.
Lon Fuller, like Dean Roscoe Pound, has been a leading critic of legal positivism. “Specifically the problem is that of choosing between two competing directions of legal thought which may be labeled natural law and legal positivism.” In The Law in Quest of Itself (1940), Fuller wrote that natural law “is the view which denies the possibility of a rigid separation of the is and the ought.” Fuller held that being and value were two aspects of a single reality. Nature or reality contained both an is and an ought. Natural law was discoverable in the process of human activity. F. A. Hayek, in The Rule of Law (Studies in Law, Institute for Humane Studies, 1975) observed:
What all the schools of natural law agree upon is the existence of rules which are not of the deliberate making of any lawgiver. They agree that all positive law derives its validity from some rules that have not in this sense been made by men but which can be “found” and that these rules provide both the criterion for the justice of positive law and the ground for men's obedience to it. Whether they seek the answer in divine inspiration or in the inherent powers of human reason, or in principles which are not themselves part of human reason but constitute non-rational factors that govern the working of the human intellect, or whether they conceive of the natural law as permanent and immutable or as variable in content, they all seek to answer a question which positivism does not recognize. For the latter, law by definition consists exclusively of deliberate commands of a human will.
Hayek's characterization of natural law in opposition to positivism or legal realism finds an interesting analogue in H. L. A. Hart's contrast between two extremes of American jurisprudence, “the Nightmare and the Noble Dream.” For Hart, the “Nightmare” is the view that judges always make and never find the law they impose on litigants. The opposed view of the “Noble Dream” is that the judge never functions as a legislator but rather lives up to Lord Radcliffe's ideal judge: the “objective, impartial, erudite, and experienced declarer of the law.” [See H. L. A. Hart, “American Jurisprudence Through English Eyes: The Nightmare and the Noble Dream.” Georgia Law Review 11 (September 1977): 969–989.]
There have been more articles and books written on ethics in the twentieth century than in the entire history of the subject before 1900. Whether a great deal has been added to the wisdom of the ages by this proliferation of essays is a matter for individual judgment; but that many distinctions have been made and many concepts clarified that were not made or clarified before is surely beyond question. Because of this very proliferation of studies, the following essay does not attempt an exhaustive survey of the entire field of twentieth-century ethics. Some major ethical approaches and “schools”—such as the Continental phenomenological, existential, and realist—are passed over for reasons of space. Our survey will focus primarily on works in the tradition of Anglo-American ethical analysis.
The great work in ethics which concluded the nineteenth century, a book which many scholars consider the greatest ever written on ethics, is Henry Sidgwick's The Methods of Ethics. If you want to know which ethical terms are definable and how, or if you desire a clear and worked-out breakdown of the main ethical theories and what can be said for and against each of them; or, on a less general level, if you want to know what can be said both in favor of and against laws against prostitution or laws protective of individual privacy, the arguments are all there, laid out in the greatest detail and in a highly systematic manner. No book has ever equaled this one in its thoroughness, scope, and integration of material. The twentieth century may have added new examples or cast fresh light on old ones, but most twentieth-century treatments of ethical issues are incomplete and sketchy compared with Sidgwick's great work. A good introduction to Sidgwick's ethics is contained in C.D. Broad's Five Types of Ethical Theory (the last two chapters), and an extended account and evaluation is to be found in Jerome Schneewind's Sidgwick's Ethics and Victorian Moral Philosophy.
The philosophical background of ethical theory which was most influential on Sidgwick was that of the great eighteenth-century British tradition in ethics: Bishop Butler, Samuel Clarke, Ralph Cudworth, and others. An excellent collection of readings by these eighteenth-century thinkers is Selby-Bigge's anthology, British Moralists.
Then, in 1903, a book appeared which changed the direction of ethical thinking, G.E. Moore's Principia Ethica. Moore's main charge against previous ethical thinkers—all except Sidgwick—was that they had not got the issues straight. John Stuart Mill had defended his utilitarian theory—that the right act was the act which produces the most intrinsically good consequences—without ever making clear whether what he was presenting was a definition of “right” or a statement about right acts: if it was a definition of right, one could object that many acts are believed to be right although they do not produce the best possible consequences in that instance (for example, keeping a promise even though the consequences of breaking it would be better); and if it was a statement about right acts—if we are being told that all right acts also have another characteristic, that they are maximally good-producing—then we do not know how to evaluate the second sentence until we know what the word “right” in it means: to say that one characteristic A always goes along with another characteristic B tells us nothing unless we know what sort of characteristic A is, and this Mill does not tell us. Moore makes similar comments about a whole array of his predecessors in ethical theory. Moore, for his part, held good (but not necessarily “right”) to be indefinable. “Good” was indefinable not in the sense that we could give no synonyms of it (such as “desirable”), but that it was a “simple,” i.e., an unanalyzable concept, like red, for which we could not, in advance, give any verbal instructions which would enable someone to identify it. This is opposed to a “complex” concept, such as horse, for which we could give such instructions, which would enable someone to recognize something as a horse by means of the definition even if that person had never seen a horse.
Moore's book was enormously influential, and the first chapter of Principia Ethica, “The Indefinability of Good,” is to this day reproduced in virtually all of the dozens of anthologies of ethics that have been spawned in the last few decades. Most philosophers do not agree with Moore, but they must come to terms with him. And the effect of his book—not quite what he intended—was to change the entire thrust of ethical thinking for at least a half century in the direction of meta-ethics, which is concerned with the meaning and definability of ethical terms rather than with normative ethics. Normative ethics concerns the discussion of which acts (or classes of acts) are right or wrong, just or unjust, which acts are violations of rights, which are the acts for which a person should be held morally responsible, the relation of acts to motives and intentions and character-traits, all of which had been the traditional subject matter of ethics since classical Greece. The later chapters of Moore's book, dealing with problems of normative ethics, were almost totally neglected in favor of the opening chapter. It is only in the last two decades that normative ethics has again come into its own; but until well after World War II, one could consult the annual index of the principal philosophical journals—Mind, Philosophical Review, Journal of Philosophy, Ethics, and others—without encountering more than one or two articles on normative ethics in any of them.
To say that a thing is desired is to make a psychological statement about persons; to say that it is desirable is to make a normative statement, namely that it ought to be desired. To say that most persons approve of something, X, is to make a statement about them; but to say that X is right is to say something quite different, which has as a consequence (if not part of its meaning) that they ought to approve X. Normative terms occur, of course, in disciplines other than ethics (for example, “beautiful” in aesthetics); but in ethics there occurs an array of normative terms—such as ‘good’, ‘valuable’, ‘desirable’, ‘right’, ‘wrong’, ‘ought’, ‘should’, ‘just’, ‘unjust’, ‘responsible’, and others—which have been the focus of meta-ethical discussion and of an elaborate tracing of their relations to each other. The primary question of meta-ethics, however, is how these terms are related in meaning to non-ethical terms: how ‘desirable’ is related to ‘desired’, how calling an act ‘right’ is related to empirical facts about the act's intentions or consequences, how calling a person (or a motive, or an intention, or a consequence) ‘good’ is related to other data about the person or the consequence. Three main meta-ethical theories have been developed in detail in the twentieth century:
Ethical non-naturalism refers to the view that ethical terms are not analyzable without remainder into non-ethical terms. A proponent of this position may hold that some (or even all) ethical terms are definable by means of other ethical terms (e.g., ‘right’ may be defined as “the act which produces the most good”); but proponents would further hold that some ethical term, at least one, cannot be defined without remainder by means of non-ethical terms. Thus one cannot escape from the circle of ethical terms. (In just the same manner, mathematical terms, however definable in terms of one another, cannot be defined by means of non-mathematical terms.) This residue of meaning is “peculiarly ethical” and not reducible to any combination of empirical or other non-ethical terms.
Moore defended this thesis in the case of ‘good’—at least of ‘good in itself’ or ‘intrinsically good’ (‘instrumentally good’ could be defined as “that which leads to what is intrinsically good”). For example, if someone says (as hedonists do) that pleasure or enjoyment and nothing else is intrinsically good, there is no way to refute him by pointing to empirical facts; one cannot say that this or that empirical feature is good “because that is the very meaning of the term.” Sidgwick had defended the same thesis about ‘right’ in saying that this and other fundamental concepts of ethics are too fundamental to be reducible to any non-ethical concept or combination of such concepts. This thesis was taken up concerning the meaning of ‘right’ in numerous essays, most notably by H.A. Prichard in his famous essay (reprinted almost as often as Moore's Chapter 1) “Does Moral Philosophy Rest on a Mistake?” (1912). Prichard argued that no deductive or inductive arguments would establish the truth of an ethical conclusion, and that fundamental ethical truths must be grasped by a direct act of intuition, together with certain safeguards. This position was refined and systematized by Sir David Ross in his extremely influential book, The Right and the Good.
All forms of ethical naturalism, the second major meta-ethical theory, have in common the thesis that ethical statements about good and right are reducible to statements or sets of statements about the natural world, especially human nature and the human situation. Most famous of the American proponents of ethical naturalism was Ralph Barton Perry, whose books, A General Theory of Value and Realms of Value, were systematic attempts to defend the view that “value” is definable in terms of psychological concepts such as desiring, willing, striving, enjoying, satisfying and the like, and that moral value (goodness) is a species of value which can be analyzed in terms of the general concept of value itself, as consisting in a preponderance of (positive) value. The Scottish philosopher C.A. Campbell argued a similar thesis on the basis of one concept only, human liking. Campbell distinguished, for example, cooperative from non-cooperative (obstructive) likings (obstructive ones being those that get in the way of other ones, either in the individual or in society as a whole); likings as end vs. as means; integral vs. incidental likings, etc.; and he defined goodness in terms of an array of these distinctions. Santayana, Dewey, and Blanshard also presented naturalistic theories. (Blanshard: “To say that I ought to do something is ultimately to say that if a set of ends is to be achieved, whose goodness I cannot deny without making nonsense of my own nature, then I must act in a certain way.”) The Ideal Observer theory, another kind of ethical naturalism, holds that an act is right if it would be approved by an ideal observer, one who fulfills certain qualifications such as impartiality; but this view tells us more about the approver than the act which is approved. 
Presumably Ayn Rand's view, that good is proper (appropriate) to the life of man qua man and that “man qua man” can be defined in terms of man's essential rational nature, would also be classified as a naturalistic theory, though she herself does not submit to such classifications.
Even the theological view that “X is right,” meaning the same as “God commands X,” is a naturalistic view (“supernatural” is the opposite of “natural” in a different meaning of that term), since it defines rightness entirely by means of divine command, and saying that someone commands something involves no ethical term. One consequence of such a view, however, is that if God does not exist, no ethical terms defined in terms of God are intelligible.
Ethical emotivism, the third major meta-ethical theory, is the view that people use ethical terms not to refer to their ostensible objects (people and actions) but to express certain attitudes toward them and to attempt to evoke those attitudes in others. The pure form of the emotive (or non-cognitivist) theory holds that ethical terms do nothing but this, and no question of the truth or falsity of ethical statements arises because the sentences used in uttering them no more express true or false propositions than do commands (“Shut the door!”) or suggestions (“Let's get out of here.”) or questions (“What time is it?”). It is only one function of sentences to express propositions (i.e., to state what is true or false), and ethical sentences actually belong with commands and suggestions rather than with propositions, in spite of the fact that grammatically they look as if they express propositions: “This is square” and “This is good” are grammatically similar, but the first states a proposition (true or false), whereas the second does not. The classic statement of the pure emotive theory is given in Chapter 6 of A.J. Ayer's Language, Truth, and Logic (1936), following upon a suggestion contained in an article by Winston F. Barnes, and is stated in greater detail in Moritz Schlick's The Problems of Ethics.
This radical emotivism, however, soon underwent considerable modification. According to the modified emotive theory, ethical sentences do act as expressers and evokers of attitudes: if you say “This would be a good thing to do” and I grant it but do nothing, the intended effect of your utterance on me has not been achieved. (This function of ethical sentences has now become almost universally recognized.) But ethical sentences also convey information: just as “This is a good wrench” conveys information, so does “This is a good man.” And since ethical sentences have cognitive (informational) meaning as well as emotive meaning, the whole question of naturalism vs. non-naturalism arises again with regard to the cognitive component of their meaning. Most modified emotivists are naturalists with regard to the cognitive component, and hold that the reason why ethical sentences are not totally reducible to non-ethical sentences is because of the irreducible nature of the emotive component (“This would be a really fine thing to do” is not the same as “This has qualities A, B, and C”).
In a beautifully clear and coherent exposition C.L. Stevenson introduced the thesis of modified emotivism in his article, “The Emotive Meaning of Ethical Terms,” followed by his article “Persuasive Definitions,” and his profound and influential book, Ethics and Language. Many modifications of emotivism were introduced into the literature, and the periodical literature of the late '40s and early '50s abounded with them. R.M. Hare's The Language of Morals contains a large number of distinctions for clarifying the issue (meaning vs. criteria, description vs. evaluation, commending vs. choosing, etc.). But the most readable and comprehensive statement of this type of view is contained in Patrick Nowell-Smith's Ethics (1954), which contains detailed and insightful analyses of “good” in all its major uses, ethical and non-ethical, tapping the literature from Aristotle to the present day, and also presents a plausible account of the meaning of ethical terms in the light of the many distinctions he puts forward. This book remains the most definitive statement of modified emotivism to the present day, and the reading of it renders almost superfluous other treatments of the issue.
The essential truth in emotivism, that ethical language is used not only to state facts but to express attitudes and to persuade others, has been pretty well absorbed into the literature and is no longer a subject of dispute. Whether, minus the emotive component, the language of ethics can be reduced to that of psychology or some other empirical discipline—whether, for example, “I ought to do X” is reducible to some such formulation as “I would feel obliged to do X if I knew all the empirical facts of the case, and if I were impartial, in a rational frame of mind etc.”—is still very much a subject of controversy. But at any rate it is fairly clear that no specific theory of normative ethics (such as Mill's utilitarianism or Kant's categorical imperative) can be derived from any naturalistic analysis, by saying that, for example, “The greatest happiness of the greatest number is what is good because that after all is the very meaning of the word.”
The nature of value, and in what sense value is subjective and in what sense objective (and the difference between “subjective” and “relative”), are thoroughly and systematically discussed in Ralph Barton Perry, General Theory of Value, followed by his Realms of Value. Many essays have been written on this topic, but it is well summarized in Nicholas Rescher, An Introduction to the Theory of Value, together with historical bibliographies, especially in nineteenth-century German philosophy.
The concept of intrinsic goodness (“good for its own sake”) as opposed to instrumental goodness (“good for the sake of something else”) is lucidly discussed, together with a defense of a plurality of intrinsic goods, by G.E. Moore in his Ethics, Chapter 6, and developed by Brand Blanshard, in Reason & Goodness, who concludes that fulfillment and satisfaction are the two intrinsic goods, all others being directly or indirectly reducible to these. The hedonistic notion that pleasure or satisfaction is the only intrinsic good is best stated by Ralph M. Blake in his classic essay, “Why Not Hedonism?” That the traditional list of intrinsic goods, e.g., pleasure, happiness, knowledge, virtue, are not intrinsic but relative (not to individuals, but) to human nature, is well defended by C.A. Campbell in “Moral and Non-Moral Values.” The entire distinction between intrinsic and instrumental good is attacked by John Dewey, Theory of Valuation, and by Monroe Beardsley in “Intrinsic Value.”
That goodness and badness are in some sense objective properties of the things or situations characterized as good or bad, is eloquently defended in the first two sections of Bertrand Russell's classic essay, “The Elements of Ethics” (1912). When Russell, after a forty-year absence, returned to writing once more on ethics, in his equally eloquent book, Human Society in Ethics and Politics (1955), he defended the view that goodness and badness characterize human subjects and not the objects characterized as good and bad. But his actual judgments on particular moral issues changed not at all as a result of this about-face in ethical theory. An excellent book-length discussion of the concepts of moral goodness is G.H. Von Wright, The Varieties of Goodness.
Psychological egoism, the theory that human beings are all necessarily motivated exclusively by self-interest (if a person does something for another it is only because it makes him feel better to do it than not to do it), is often assumed to be true in psychology, but has been exposed as riddled with logical and methodological errors by philosophers. The best defense of psychological egoism is still that of Moritz Schlick in his book The Problems of Ethics.
However, the view that people are “wired up” so as always to act out of self-interest has come under extensive attack, and not many philosophers are psychological egoists today, though psychologists and sociologists still are. The view has been attacked for a variety of reasons, principally (1) that it is an unwarranted generalization about human motives (how do you know that all people, everywhere, past and future as well as present, are always egoistically motivated?) and (2) that it is a disguised tautology (if no egoistic motive has been found, we “have faith” that with further investigation it will yet be found, and thus no exceptions to it are permitted). Probably the most thorough systematic treatment of psychological egoism is contained in C.D. Broad's classic essay “Egoism as a Theory of Human Motives.” Also worthy of scrutiny are Joel Feinberg's “Psychological Egoism” and Alasdair MacIntyre's “Egoism and Altruism.”
Ethical egoism, unlike psychological egoism, is a substantive view of normative ethics, the view that each person should act in such a way as to promote his or her own self-interest (usually long-term self-interest). Different ethical thinkers have recommended and defended different patterns of life which were supposed to promote one's long-term self-interest. Ethical egoism has a long and distinguished history. The ancient Epicureans were ethical egoists, but held that the pursuit of the “higher pleasures” (primarily intellectual and aesthetic) would lead to personal happiness more than the pursuit of the more easily cultivated “lower pleasures” (food, drink, sex), which they thought would, in the end, lead to frustration and unhappiness. The Stoics were on the whole ethical egoists, holding a pessimistic view about the possibilities for personal happiness in a decaying world but recommending apatheia (fortitude and indifference in the face of adverse fortune) as a formula for achieving whatever happiness was possible to man—a theme taken up again with embellishments by the nineteenth-century German philosopher Arthur Schopenhauer. Plato was not an ethical egoist, but he was a psychological egoist, and the problem of his Republic was to discover means by which it would be in the self-interest of each person to behave morally toward others, by invoking external sanctions (laws) to make it no longer worth one's while to harm others, as well as internal sanctions (conscience) to induce feelings of guilt at antisocial behavior. Christianity, too, is egoistic, to the extent that it promises rewards for behaving in ways that might not be to one's interest if one considered only one's earthly life.
That ethics in the twentieth century is on the whole antipathetic toward ethical egoism is not so much the result of Immanuel Kant's ethics as that of the eighteenth-century English bishop, Joseph Butler, whose Sermons remain today models of psychological perceptiveness and closely reasoned discourse. But writers on ethics today do not take ethical egoism in the “general” sense the Epicureans did, as an ideal of personal life, but rather as a rigid doctrine to the effect that one should never in any circumstances act contrary to one's own self-interest, and that apart from hope of personal advantage one never has the slightest reason to behave generously toward others—a position that might be called “extreme ethical egoism,” which would have been alien to the Greeks, and was characterized by Hume as the belief that there is no good reason to lift one's finger to prevent the fall of a civilization.
Having stated ethical egoism in this extreme form, writers today have a field day knocking it down, before passing on to “better things.” A book-length attack on ethical egoism is Thomas Nagel's The Possibility of Altruism. It is in the periodical articles, however, that the arguments are most incisively presented, e.g., Richmond Campbell, “A Short Refutation of Ethical Egoism”; W.H. Baumer, “Indefensible Impersonal Egoism”; G.E. Moore, in Principia Ethica; Donald Emmons, “Refuting the Egoist”; J.A. Brunton, “Egoism and Morality”; Brian Medlin, “Ultimate Principles and Ethical Egoism.” Two brief anthologies are highly recommended as studies of contemporary ethical egoism: David Gauthier (ed.), Morality and Rational Self-Interest, and Ronald Milo, Egoism and Altruism.
The principal barrier to the acceptance of ethical egoism has been the concept of “the moral point of view”—the God's-eye point of view, that of the impartial and omniscient spectator who judges fairly on the basis of the interests of all parties (like a judge in court) rather than the interests of one individual. Ethical egoism, because it recommends action in accordance with the agent's own interest, is thus perceived as outside the pale of morality altogether. The principal author of this view (who, however, takes his cue largely from Kant) is Kurt Baier in his book The Moral Point of View, as well as in his article “Moral Reasons.” Also of note is Paul Taylor's article “On Taking the Moral Point of View.”
Nevertheless, ethical egoism has had some defenders. Jesse Kalin is the author of an influential article “In Defense of Egoism,” as well as of “On Ethical Egoism.” The defense of ethical egoism is the subject of book-length treatment by Robert G. Olson, The Morality of Self-Interest, though his view that “the individual is most likely to contribute to social betterment by rationally pursuing his own best long-range interests” could be construed as utilitarian (egoism as a means toward doing what's best for everyone) as much as egoistic. Some objections to ethical egoism are attacked in John Hospers, “Baier and Medlin on Ethical Egoism,” and a positive version of ethical egoism is recommended in his “Rule-Egoism.” Henry Hazlitt's book The Foundations of Morality defends egoism with emphasis on voluntary cooperation as a means of expanding self-interest.
Utilitarianism has taken many forms in the twentieth century. John Stuart Mill defended it in its hedonistic form (“Act so as to maximize happiness”), but Hastings Rashdall in his remarkably fair and insightful book Theory of Good and Evil, a classic in normative ethics second only to Sidgwick's Methods of Ethics, stated what he called “ideal utilitarianism”: “Act so as to maximize intrinsic good,” leaving open the possibility that other things besides happiness are intrinsically good—thus placing the controversy over what is intrinsically good within the theory of good, while the utilitarian criterion of right action remains within the theory of right.
“Maximize net expectable utility” has been the basic formulation of utilitarianism—the word “net” indicating that it is not the amount of good that counts but the amount left over after the bad has been considered: thus 10 units of happiness with 5 units of unhappiness is not as good as 6 units of happiness with no unhappiness. “Expectable” indicates that the action to perform is the one that could rationally (on the basis of the evidence available) be expected at the time of acting to produce the most good, not the one that actually did so, something one would have had to be omniscient to know in advance. Thus, if I drive a friend home and have an auto accident on the way, which is not my fault, but in which we both are injured, the act is not wrong, though it would be if one considered only the actual consequences. G.E. Moore's Ethics contains the clearest statement of utilitarianism, minus misleading formulations such as Mill's “greatest happiness of the greatest number,” an ambiguous formula because what produces the greatest total happiness may not produce it for the greatest number. (Jan Narveson's Morality and Utility is a thorough and highly recommended discussion of utilitarian ethics.)
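The arithmetic behind “net expectable utility” can be made concrete. The following sketch is illustrative only—the unit values and probabilities are invented for the example, not drawn from the essay's sources:

```python
def net_utility(good, bad):
    """Net utility: the good produced minus the bad, not the gross amount of good."""
    return good - bad

# The essay's example: 10 units of happiness bought at the cost of 5 units
# of unhappiness is worse than 6 units of happiness with none.
print(net_utility(10, 5) < net_utility(6, 0))  # True

def expectable_utility(outcomes):
    """'Expectable' utility: net utility weighted by the probabilities that
    were foreseeable at the time of acting (an illustrative model)."""
    return sum(p * u for p, u in outcomes)

# Driving a friend home with a small, unforeseeable risk of an accident:
# the act is judged by what could rationally be expected beforehand,
# which here is positive, not by the unlucky actual outcome.
print(round(expectable_utility([(0.99, 5), (0.01, -100)]), 2))  # 3.95
```

Nothing in the essay fixes a unit of measurement; the point of the sketch is only the two distinctions it encodes—net versus gross, and expectable versus actual consequences.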
A version of utilitarianism that has become current in the last four decades is rule-utilitarianism (so-called in contrast with the version already discussed, act-utilitarianism). According to rule-utilitarianism, one should act in accordance with the rule of conduct which has the best consequences (highest net expectable utility), not with the particular act which does. For example, finding an innocent scapegoat and sentencing him may stop a crime wave and restore faith in law and order and so on, but it would still be wrong to do this because the rule “Never condemn the known innocent” is a good one, indeed the best one possible in the circumstances, even though there may be exceptional occasions on which the act of condemning someone known to be innocent (a perpetual nuisance, guilty of previous crimes for which he has been let off, etc.) may have better consequences.
A great deal of controversy, however, has attended the status of rule-utilitarianism. Some writers have argued that rule-utilitarianism is reducible to act-utilitarianism. (1) A rule, such as “Never convict those who are known to be innocent,” would be improved from the utilitarian point of view if it were revised to read “Never convict the innocent except when doing so would promote more good than not doing so,” which is exactly what act-utilitarianism would advocate; and the same for any rule of conduct: “Do not break promises once voluntarily made” would become “Do not break promises except when breaking them would do more good than keeping them,” which again is act-utilitarian. (2) To encompass the complexity and individuality of actual situations, one might have to make rules extremely complex, with so many qualifications that each rule would apply to only one situation—in which case doing the “optimific” act (the act with the best consequences) would be identical with following the optimific rule.
There have been numerous attempts to state rule-utilitarianism in such a way as to prevent it from collapsing into act-utilitarianism. Of these attempts the most noteworthy are Jonathan Harrison's “Utilitarianism, Universalization, and Our Duty to Be Just,” and Richard Brandt's “Toward a Credible Form of Utilitarianism” (making the set of rules short enough to be learnable, and relativizing them to the culture), but the philosophical literature is full of others. J.J.C. Smart defends act-utilitarianism against rule-utilitarianism in An Outline of a System of Utilitarian Ethics, and again later in Smart and Bernard Williams, Utilitarianism: For and Against.
There are at least three first-rate anthologies of readings on contemporary utilitarianism, which present the issues clearly, juxtaposing readings for and against various utilitarian positions so as to convey an exciting sense of confrontation of ideas: Michael Bayles (ed.), Contemporary Utilitarianism; Samuel Gorovitz (ed.), Mill's Utilitarianism: Text and Critical Essays; and Thomas Hearn (ed.), Studies in Utilitarianism. The many varieties of utilitarianism spawned in the twentieth century are well described and evaluated in David Lyons's comprehensive survey, Forms and Limits of Utilitarianism.
There are several logical consequences of utilitarianism (of whatever kind) that are sufficiently counter-intuitive to have led moral philosophers to seek not only modifications but the scrapping of the entire theory. Prominent among these are: (1) Aside from the difficulty (impossibility?) of interpersonal utility measurements and comparisons (how do I know that you are experiencing more pleasure or satisfaction than I am?), there is a moral problem which is best brought out in the example of killing. Ordinarily death is preceded by pain and distress, which of course counts as a negative in the utilitarian calculus; but painless death, particularly when one does not dread it because one doesn't know it is going to occur, is not. Thus, if a hundred persons were to be made somewhat happier by one person's death, and if that death were to be painless, and it seemed probable that if he remained alive he would have more unhappiness than happiness for the remainder of his life, it would seem that there is nothing in utilitarianism to prohibit killing him. This theme is developed most thoroughly in Richard Henson, “Utilitarianism and the Wrongness of Killing.”
(2) Another problem is that utilitarianism is too demanding: it does not make our usual moral distinction between what is a positive obligation (something it is wrong not to do) and what is merely permissible (something it is not wrong to do). “It would be a good thing if you did this” or “It would be wonderful if you did that” is not the same as “It is your duty to do that”—yet according to utilitarianism it is always our duty to maximize utility, and doing anything less than this would be wrong. If a man comes home after a day's work and has a beer in front of his television set, it is surely strange to say he is doing wrong, yet if his one duty is to maximize human welfare there are plenty of other things which he could be doing which would produce more good if he did them. This line of reasoning is forcefully pursued in the final chapter of Gilbert Harman, The Nature of Morality.
(3) Utilitarianism seems not to take account of the diversity of the sources of obligation. If you agree to pay a boy to mow your lawn and he has done his work and comes to collect his wages, should you pay him only if you can find no more good-producing things to do with your money? (See Richard Brandt, A Theory of the Good and the Right. )
There is an ambiguity in the word “duty” which is of relevance here. Originally it meant giving what is due (paying your dues, as it were); the presence of something due, such as a return of money borrowed, was always the result of a previous agreement. (See Joel Feinberg, Social Philosophy, p. 63. ) But in the last two centuries the word “duty” has come to mean anything at all which one is morally obliged to do, and in utilitarianism this means maximizing utility. Almost alone among contemporary moral philosophers, the British philosopher C.H. Whiteley (whose works are short and infrequent but always strike the jugular) put forth in his essay “Duties” the thesis that duties (in the current sense) always arise through prior agreement and that no man can be charged with any unchosen duties—a thesis rejected by most moral philosophers, and one which, in any case, it is impossible to reconcile with utilitarianism of any stripe.
Closely connected with the topic of rule-utilitarianism is that of utilitarian generalization, whose criterion of action is not “What would be the consequences if I did X?” but “What would be the consequences if everyone did X?” There is some such criterion implicit in certain interpretations of the Golden Rule, though as it stands the Golden Rule is considered by many to contain a serious defect: its rule of action depends on what a person's desires happen to be. For example, “I should do to others as I wish them to do to me; so I should assist you in your criminal activities because I would wish you to assist me in my criminal activities.” R.M. Hare reinterprets the Golden Rule as a rule of impartiality: a person may make no exceptions on his own behalf: if it's wrong for others to do X, it's wrong for me, and if it's permissible for me to do X, it's permissible for others too (if their situation is similar in relevant respects). This position is amplified and defended in Hare's influential book, Freedom and Reason.
Kant's Categorical Imperative—“act so that the maxim of your action could become a universal law of human conduct”—contains a reference to generalization, though not of a utilitarian kind because Kant appealed to consistency rather than to consequences: Kant's ethics is not consequentialist, utilitarianism is. An attempt to combine the formalism of Kant with an appeal to utilitarian consequences is outlined in Marcus Singer's article, “Generalization in Ethics,” and amplified in detail in his book Generalization in Ethics. I should not do X if the consequences of (not my, but) everyone's doing X would be disastrous. But there are numerous counter-intuitive consequences of this position which must be dealt with, and Singer's discussion as well as the many commentaries it has stimulated attempt to deal with them. For example, (1) if everyone refrained from taking up arms, there would be no need for me to do so; but since we live in a world in which violent persons exist, should I ignore this fact and adapt my actions only to the situation in which everyone obeys the rule? Shouldn't one's ethics be geared to the real world rather than an ideal world? But in that case what happens to the generalization test? (2) Often it is perfectly plain that not everyone will do X, and should I do X (follow a good rule) even knowing that many or even most people will not do so? Might not my conformity be an exercise in futility in that case? Yet if I should refrain from following a good rule simply because I know or believe that others will not, many good rules would never be followed, they would never even get off the ground.
These problems have been argued back and forth most fruitfully and challengingly by A.K. Stout in “But Suppose Everyone Did the Same,” by A.C. Ewing in “What Would Happen if Everyone Acted like Me?,” by Colin Strang in “What if Everyone Did That?,” by Kurt Baier in The Moral Point of View, and by Don Locke in “The Trivializability of Universalizability.”
Theories of normative ethics are divided into consequentialist and deontological. Both egoism and utilitarianism are consequentialist theories: in egoism one considers the consequences of the act to oneself, in utilitarianism one considers the consequences of the act to everyone affected by it. But deontological theories are not consequentialist: they do not base the rightness of actions entirely on consequences (they may do so partially), not even rationally expectable consequences.
The ethics of Kant is the purest example of deontological ethics, officially divorced entirely from consequences (though it has been argued that there is a covert appeal to consequences in many of the examples Kant himself gives). Kant's first moral law was the Categorical Imperative: “Act so that the maxim of your action could become a universal law of human conduct.” Some of the logical consequences Kant drew from this—for example, that one is never justified in telling falsehoods—have seemed so implausible that virtually no ethical thinker today accepts this aspect of Kant's ethics without emendation. (If the secret police are trying to find your friend, are you not justified in lying to them so that your friend will not be found?) Kant's second moral law, “Treat every person as an end, not as a means,” has been subjected to less criticism by contemporaries, but there is considerable vagueness in what is meant by the phrase “treating someone as an end.” The most thorough-going discussion of this issue is in Respect for Persons by R.S. Downie and Elizabeth Telfer.
The most systematic defense of Kantian ethics in our century is H.J. Paton, The Categorical Imperative. A clear and simple presentation of Kant's ethics is H.B. Acton, Kant's Moral Philosophy. A brief but admirably lucid descriptive and critical account is W.D. Ross, Kant's Ethical Theory. The best anthology of readings on Kant's ethics is Robert Paul Wolff (ed.), Kant: Foundations of the Metaphysics of Morals, Text and Critical Essays, which contains Kant's text as well as some of the best contemporary essays on it.
Trenchant criticisms of central tenets of Kant's ethics are Alasdair MacIntyre, “What Morality is Not,” in which the author argues that moral judgments are not necessarily universalizable (saying “I ought to do X” does not imply that anyone in exactly my circumstances ought to do X); and Philippa Foot, “Morality as a System of Hypothetical Imperatives,” in which she argues that categorical imperatives are not required in ethics and that hypothetical imperatives (“If you want this, then do that”) can be made to do justice to ethics just as well. Richard Taylor, in his penetrating and perceptive book Good and Evil, argues against Kant (1) that general rules are useless in ethics (when they enjoin an act one considers wrong, one simply scraps or amends the rule), and (2) that morality is based not on reason but on man's conative nature (his wants and desires) and that ethics is a matter of adjudicating the conflicting wants of different people.
Most presentations of deontological ethics combine utilitarian with Kantian features. For example, Sir David Ross, in what is probably the most famous and commented-on system of deontological ethics in this century, The Right and the Good, does not agree with Kant that it is always wrong to break a promise, but neither does he find utilitarianism correct in believing that whether or not one should keep a promise depends entirely on the good or bad consequences of keeping it. There lies in the fact of having made a promise a prima facie obligation to keep it, and thus the obligation to keep it does not depend solely on the good consequences that ensue from doing so. Ross conceives of a heterogeneous collection of prima facie duties, most of which arise not from the hoped-for benefit of producing certain consequences but from certain events taking place in the past. For example, I have promised a certain person that I will do a certain thing, and the fulfillment of the promise is owed to that person (I did not promise anyone else), and it is because of a past event, my making the promise. There are thus special duties to special people for special reasons, not merely the general duty of beneficence as envisaged by utilitarianism.
There is, for example, (1) the duty of fidelity; if I have made a promise, it would be wrong of me to break it just because I had reason to believe that I might do more good by breaking it (not paying an employee his wages but giving the money to charity instead); (2) the duty of reparation, i.e., the duty to make restitution for damages and injuries I have caused; (3) duties of gratitude, to help first those persons (such as parents) who have helped me; (4) duties of justice, to effect a proportionality between reward and desert; and (5) duties of self-improvement, to realize more fully one's own potentialities. Difficult as it is to weigh these against each other, they must all be considered in deciding what acts one should perform; no attempt to reduce all these duties to the utilitarian model will suffice, since many of one's duties are (1) past-looking (based on previous acts) rather than future-looking (with an eye only to consequences), and (2) to special people for special reasons rather than to mankind in general.
Ross's highly influential work has led to much discussion and some criticism, particularly about the methodology by which he arrived at it; but it has largely sufficed to shake any conviction one may have about the sufficiency of utilitarianism as the sole means of solving all the problems of normative ethics. It is, however, only one kind of approach toward breaking the back of utilitarianism. Other deontological views are presented in W. Hudson, Ethical Intuitionism; Alan Donagan, The Theory of Morality; Alan Gewirth, Reason and Morality; A. C. Ewing, Second Thoughts on Moral Philosophy.
Other approaches consist in an explication and defense of the concepts of justice and rights.
There are occasions when the best consequences (more good for more people) might occur if individuals were treated in certain ways which are called violations of their rights. For example, an innocent person may be convicted in order to stop a crime-wave and restore faith in law and order; persons may be taken to slave labor camps to provide cheap labor for the State where private enterprise is prohibited; crimes may be solved at the expense of bugging and wiretapping and other violations of privacy; a person may be tortured in order to make him reveal classified information. And yet, we are inclined to say, it would be wrong to do these things, i.e., to violate these rights. There is much controversy in contemporary ethics about rights, their basis, their nature, and their scope, and, most of all, what is to be done when they conflict.
Feinberg defines a right as a valid claim which one person has against others: if I have a right to my car, I have a valid claim upon you not to use or damage it without my permission. Herbert McCloskey defines a right in terms of an entitlement: “rights are entitlements to do, have, enjoy, or have done”—whether or not one lays claim to or exercises the right.
There is more disagreement about the basis of rights than about any issue on this topic. (1) Some, such as Gregory Vlastos, hold that the concept of rights is rooted in a conception of universal human worth, as opposed to merits, for the person may have little or no merit and one should still respect his rights because he is a human being. (“If I see a stranger in danger of drowning, I am not likely to ask myself questions about his moral character before going to his aid; my obligation here is to a man, to any man in such circumstances, not to a good man... No one has the right to be cruel to a cruel person. His offense against the moral law has not put him outside the law; he is still protected by its prohibition of cruelty—as much so as are kind persons.”) (2) Others base rights on man's rationality; however, some persons appear to be more rational than others, and are they therefore more worthy to have their rights respected? Has a congenital idiot no rights? Some animals appear to be more rational than some people—but much depends on what is meant by the slippery word “rational.” (3) Others have held that the basis of human rights lies in our common human vulnerability, especially our liability to pain and suffering. Since this feature is shared by the animal kingdom, it would appear that animals too have rights. Some persons are more sensitive to pain than others; do they have more rights? And certain forms of treatment which violate rights might cause no pain or suffering at all, such as the painless murder of an innocent man. (4) Or it may be held that “universal respect for human beings is, in a sense, groundless—a kind of ultimate attitude not itself justifiable in more ultimate terms. The parent's unshaken love for the child who has gone bad may not be a response to any merit of the child (he may have none), or to any of his observable qualities... 
‘I am his father after all’ may explain to others an affection that might otherwise seem wholly unintelligible, but that does not state a ground for the affection so much as indicate that it is ‘groundless’, but not irrational or mysterious for that.”
There are various overlapping classifications of rights. There are legal rights which can be claimed in the legal system (which varies from nation to nation and state to state), as opposed to moral rights, those which ought to be upheld by a State even if they are not. There are specific rights, such as your right to collect $100 from me at an agreed upon time if I have borrowed it from you, a right which does not extend to non-lenders, as opposed to general rights, e.g., the right to life and liberty, possessed by everyone because of some feature (not everyone agrees on which) of their common human nature. There are (real or alleged) positive rights, the rights to claim or possess goods and services produced by others (e.g., the right to a minimum standard of living, the right to a paid vacation, the right to decent conditions of work, etc.), as opposed to negative rights, e.g., the right to conduct one's life peacefully and free of aggression, which demand no positive action by others but only forbearance from others in the conduct of one's own activities. There are substantive rights, e.g., rights to liberty and property, as opposed to procedural rights, e.g., the right to a fair trial, the right not to be incarcerated without knowing the nature of the charge against one, which have to do with the procedures by which substantive rights can be implemented or are more likely to be achieved.
Only a few writers, such as Ayn Rand in her essay “Man's Rights,” take the position that all rights are negative rights, requiring only non-interference by others rather than positive action. Thus the right to life is the right to conduct one's life according to one's own plan as long as one does it non-coercively, without violating the rights of others. All rights, on this view, are rights to take certain actions, not rights to things; the right to property is not the right to take property but to take such action as is required to earn it. Most writers take the position that each person has a right to certain positive benefits from others, and then the controversy centers around the question of how extensive these benefits shall be, and which of life's goods each person has a right to, especially when granting one good means doing without another. Herbert McCloskey argues that each person in the world has a right to the equivalent of three meals a day, decent housing, leisure, etc. To the objection that in some areas it is impossible to produce all this for each person, he replies simply that every person has a right to these things but there is no way (at the moment) to implement this right in practice.
After decades of comparative neglect, the topic of rights is once again in the forefront of ethical controversy. One of the best brief discussions of rights (both legal and moral), which takes no strong position but lists arguments clearly and provides a convenient taxonomy of the subject, is to be found in Joel Feinberg, Social Philosophy, Chapters 4-6. Two short paperback anthologies, which contain some of the clearest and most provocative essays on the concept of rights and problems encountered in trying to adjudicate conflicting rights-claims, are David Lyons (ed.), Rights, and Abraham I. Melden, Human Rights. Other discussions of various aspects of rights are Alan Gewirth, Reason and Morality, A.I. Melden, Rights and Right Conduct, and the same author's Rights and Persons. In a legal context, see Edmund Cahn, The Great Rights.
The British philosopher Jonathan Glover defends utilitarianism against all alternatives, even in the face of such arguments about rights as have been summarized here (as well as many others), in a fascinating book, full of examples and case histories, Causing Death and Saving Lives. A similar book, not quite so wedded to utilitarianism but equally permeated with thought-provoking examples, is John Mackie, Ethics.
There is considerable ambiguity in the formulation of specific rights: for example, there is a minimal interpretation of “the right to life,” which interprets it simply as the right not to be murdered: thus a totalitarian dictatorship in which the State owned and totally governed everyone, but did not kill anyone, would respect each individual's right to life according to this interpretation. On the other hand, the maximal interpretation of the right to life construes it as the right to live one's life in accord with one's own voluntary decisions, as long as one does it non-coercively (so as not to interfere with the right to life of others), and in this interpretation the right to life includes other rights such as that of property and freedom of expression.
Property rights have been a subject of controversy ever since John Locke made “mixing one's labor with the land” the basis of property rights in his Second Treatise concerning Civil Government, and Herbert Spencer clarified and expanded some of Locke's points in his Social Statics (1851). But many problems beset property rights, chief among them being how the claims of property are to be adjudicated against other claims: does owning a house give you the right to play loud music and disturb neighbors? to raise poisonous snakes or engage in dangerous activities? to exclude police who are searching for a criminal? to walk about nude even if it offends your neighbors? to build carelessly on a hill so as to endanger houses below in case of rain or mud-slides? and so on endlessly. Perhaps the most careful and judicious of current books on this subject is Lawrence Becker, Property Rights. A popular brief anthology of readings on property rights is Virginia Held (ed.), Property, Profits, and Economic Justice, a group of readings with a collectivist orientation.
Articles on some aspect or other of rights constantly appear in such journals as Ethics and Philosophy and Public Affairs. John Rawls's tremendously popular book, A Theory of Justice, lists many things to which each individual is alleged to have a right, but since some of them work against one another in practice, each right is prima facie rather than absolute: for example, the right to liberty is often at odds with the right to “a reasonable standard of living,” since those who do not provide the latter for themselves must receive it at the expense of the earnings of others, thus compromising their liberty. Rawls goes so far as to say that in order to assure the right to welfare for everyone, and to be sure that private individuals and corporations do not exploit the populace, the State should own most of the materials of production. Rawls's book alone has provoked many hundreds of articles and commentaries, including other books such as Brian Barry's The Liberal Theory of Justice, which is conceptually acute but complains that Rawls gives too much latitude to individualism! More recently, Ronald Dworkin's celebrated book, Taking Rights Seriously, does not take most rights very seriously at all, and is concerned almost exclusively with procedural rights within a judicial system; as far as substantive rights are concerned, his position is difficult to distinguish from utilitarianism. Only Tibor Machan, in his book Human Rights and Human Liberties, makes a case for human rights in the pattern of Ayn Rand's “Man's Rights,” although H.L.A. Hart's thesis that there is but one natural right, “the right to be free,” could be said to have the same implications.
Closely associated with the issue of rights is that of justice. In a utilitarian calculation, a hundred persons may benefit from the death of one person, and might be justified in putting an end to his life if they benefit sufficiently from his death, and yet such an act would not only be a violation of his right to life, it would be unjust. The two concepts overlap, and one of the five meanings of the term “justice” put forth by Mill in Chapter 5 of his Utilitarianism is a violation of rights. Yet the two concepts are different: rights are like a no-trespassing sign beyond which one may not go in his treatment of other human beings; justice has the connotation of fairness, and many acts and situations can be unfair without violating a person's rights, such as leaving one of your children unprovided for in your will and leaving everything to the other child.
In dealing with justice, it is important to distinguish individual justice from comparative justice. Individual justice is the apportionment of treatment to desert, already discussed. Comparative justice is not collective justice as might be supposed; “collective justice” is a contradiction in terms, since justice is by its nature individualistic: if a person has been unjustly treated it is no mitigation of the injustice that other members of his race or class or club have been justly treated. Comparative justice is justice in treatment in relation to others: thus, if one person has received the penalty he deserved for an offense, but two other persons who committed the same offense have been let off, his treatment is still just (since he got what he deserved) but unjust in relation to theirs, since they were unjustly let off. Feinberg's treatment of justice (Chapter 7 of Social Philosophy) brings out this and other distinctions in the concept of justice with great clarity. See also Otto Bird, The Idea of Justice.
The central concept of justice is that of treatment in accord with desert. Just punishment is getting the punishment one deserves, just reward is getting the reward one deserves. There are different views of what one deserves, of course, and thus different persons would disagree on whether a specific act or outcome was just, not because they disagreed with the definition of justice, but because they have a different concept of which outcomes are deserved. Some would say for example, that the only criterion of a just wage is (1) the degree of achievement in it, just as grades in courses are measured; some say that a more proper criterion is that of (2) amount of effort expended, regardless of how much is achieved by that effort; some say that the proper criterion is (3) equality—that all distributions should be equal regardless of the amount of effort or achievement (in spite of Aristotle's dictum that justice consists in rewarding equals equally and unequals unequally); some even say that deserts should be based on (4) need, as a supplement to or a substitute for the previous criteria (or, sometimes, that justice consists in fulfilling needs, and has nothing to do with desert).
In a free-market system, of course, wages must be based on achievement: those who need a job most are those likely to be least equipped to fulfill it; if everyone received equal wages there would be no motivation to rise or to do well (Edward Bellamy's nineteenth-century classic Looking Backward to the contrary notwithstanding); and if wages were based on effort, those who could with great effort type 5 words a minute would receive higher wages than those who could do 90 words a minute with ease. But most contemporary writers on justice allow the free market only a very limited scope.
The most influential contemporary book on justice is John Rawls's A Theory of Justice. In addition to the limitations on the free market previously mentioned in discussing Rawls, the operations of the market are to be constrained by the “maximin principle,” according to which any innovation, such as an invention, might not be instituted or permitted to exist if anyone whatever is the worse off for it: it must benefit the “least advantaged persons” in society along with the persons involved in inventing it. Thus the automobile should benefit even the buggy manufacturer whose profession was replaced by the invention of the automobile. One wonders how many useful innovations in a society would pass that requirement. Nor is it ever discussed how the “least advantaged” came to be so—whether through no fault of their own, or through spending beyond their income, etc.—an important point which is carefully discussed in Herbert Spencer's The Man vs. the State (1884).
Rawls's is a contract theory of justice: society is to be governed by such rules as would be agreed on by everyone in advance of knowing his or her particular position in it (“the veil of ignorance”). Thus no one would vote for racist legislation from behind the veil of ignorance, since he might turn out to belong to the race that was legislated into inferiority, nor would he favor anti-labor legislation because he might turn out to be a laborer, and so on. The veil of ignorance device is apparently designed to guarantee impartiality, but one could speculate on what is left of this hypothetical voter once he is ignorant not only of the particular facts of his situation, but of the sex and race he belongs to, the era in which he lives, and the temperament he has.
There have been many hundreds of reviews and commentaries on Rawls's work. The clearest and most thorough is that of R.M. Hare in a two-part article in the Philosophical Quarterly. Brian Barry's The Liberal Theory of Justice100 is the most comprehensive book-length analysis of Rawls's work. The most incisive criticism is by Wallace Matson, “What Rawls Calls Justice.” Most of his critics would impose more restrictions on the operations of the market than Rawls does, not fewer.
A pair of influential books on justice in a distributive context are by Nicholas Rescher, Distributive Justice and Welfare. Rescher argues the insufficiency of utilitarianism: first of all, because the greatest welfare may accrue to the smallest number (utilitarianism contains no principle of distribution); second, a utility “floor” is needed—there should be a minimum of material welfare below which no one is permitted to go; third, each person's share under utilitarianism has nothing to do with desert, since desert is a “past-looking” concept based on past performance and not future potential, and utilitarianism looks only to maximizing good future consequences. The upshot of his discussion is not all that different from Rawls's, though it is arrived at by quite a different route; at any rate, the recommended interventions in the operation of the free market are as numerous in Rescher as in Rawls. In both cases the discussion is primarily concerned with how wealth is to be distributed, with no mention whatever of how it is to be produced; nor is any moral question raised about the legitimacy of transferring wealth from producers to non-producers without the producers' consent.
The free market as a mechanism for achieving justice is almost, but not quite, ignored in the literature. The most remarkable chapter of F.A. Hayek's The Constitution of Liberty, “Equality, Merit, and Value,” defends the justice of the free-market system, justifying the fact that in a free market we trade with others on the basis of what they produce that we desire to have, rather than on their moral merit or degree of effort expended in doing so. Defending the free market also are two essays, “Welfare and Government” and “Free Enterprise as the Embodiment of Justice,” both by John Hospers; and Irving Kristol treats capitalism favorably in his discussion of justice. Far more influential in academic circles, however, are books such as Norman Bowie's Toward a New Theory of Distributive Justice, in which the demands of justice are not satisfied by anything short of a full-fledged socialist state. In Michael Bayles's recent book Principles of Legislation, the laws regulating the market are so all-pervasive that it is doubtful whether a market economy could long survive all the recommended restrictions. Perhaps the most extreme example of all is an article by Kai Nielsen, “On Justifying Revolution,” in which the author argues that (1) The U.S. is a capitalist nation, (2) capitalism means exploitation of the poor by the rich, (3) the power of the state must be invoked to correct this situation and turn the U.S. into a just (Marxian) society, and (4) if this is not done quickly, violent revolution is called for to overthrow the capitalists, by assassination if necessary.
The concept of equality is customarily treated together with that of justice. But equality must be equality of something: equality of income, equality of happiness or pain, equality of opportunity, etc. The only kind of equality embedded in the American political structure is that of “equality before the law”: e.g., if two persons are convicted of equal offenses (including all the conditions, such as motivation and extenuating circumstances) there should be equal penalties—one person should not be treated differently from another because of his race or creed, or because he is a friend or relative of the judge, or for other reasons that have nothing to do with the merits of the case.
The principal focus of discussions of equality in the latter half of the twentieth century, however, is not equality before the law (its validity is taken for granted) but rather (1) equality of income and (2) equality of opportunity. There are problems, however, with both of these. To maintain equality of income would require such a constant process of enforcement (taking away from those who had worked harder or made wise investments, to give to those who had not) that a full-fledged police state would almost necessarily be the result. Moreover, before long all incentive would disappear and what would be left would be equality of poverty, or “splendidly equalized destitution.” Equality of opportunity is a more plausible goal; yet if complete equality of opportunity is demanded, and not merely a chance to achieve, it too would require not only extensive legislation calling for enforced redistribution of income, but ultimately also the abolition of the family, for as long as some children have parents who provide a congenial atmosphere for their development and others have parents who do not, an inequality of opportunity results which is far greater than any inequality that is brought about by unequal incomes.
The best book-length treatment of the concept of equality is John Wilson's Equality. A popular current paperback anthology on the subject, particularly in the context of government “affirmative action” programs, is Equality and Preferential Treatment, edited by Marshall Cohen, Thomas Nagel, and Thomas Scanlon. The most conceptually penetrating, as well as fair and balanced, pair of essays on the subject (published together), both entitled “Equality,” are by Richard Wollheim and Isaiah Berlin. Thrusting in the direction of equality of income for everyone regardless of merit or qualifications are Bernard Williams, “The Idea of Equality,” and Hugo Bedau, “Radical Egalitarianism.” A powerful counterargument to these pieces is contained in two trenchant essays by the British philosopher J.R. Lucas, “Against Equality” and “Against Equality Again,” as well as in an equally rousing piece by another British philosopher, Antony Flew, “Justice or Equality?”
The most controversial application of the concepts of utility, rights, and justice occurs in the theory of punishment. When a legal offense has been committed, what considerations are to determine what punishment, if any, should be meted out to the offender, and what general criteria are to be used for such a determination?
The most time-honored view of punishment, as well as apparently the one most deeply rooted in human nature, is the retributive. Retribution need not mean “an eye for an eye”; it need not imply any view about the punishment being similar to, or a mirror-image of, the crime—these are special twists that have been attached historically to the retributive view. All that is essential to the retributive (or deserts) theory of punishment is that (1) punishment is because of, not in order to—in other words, what justifies punishing is a misdeed committed in the past, and that (2) punishment should be in accord with desert (in other words, it should be just). First and foremost, a judge pronounces a sentence because the person has been convicted of a crime, not in order to improve the prisoner or society. Lucid defenses of the retributive view in the twentieth century include: J.D. Mabbott, “Punishment”; C.S. Lewis, “The Humanitarian Theory of Punishment”; H.J. McCloskey, “A Non-utilitarian Approach to Punishment”; and Herbert Morris, “Persons and Punishment.”
A more modern and allegedly enlightened view of punishment is the utilitarian: according to this view, the purpose of punishing is to do some good—the past cannot be changed, and it is immoral to punish if no good can come of such action. The good to be achieved (or evil to be avoided) by punishing, however, falls in different ways upon different groups: (1) there is good to be achieved for the offender, namely rehabilitation; (2) the good of potential or would-be offenders, namely deterrence; (3) the good of society in general, namely protection against criminals. Some writers call this the deterrence theory of punishment, though deterrence is only one facet of the utilitarian theory of punishment. Moreover, some of these factors work against others: a murderer is more likely to be a model prisoner than a petty thief is, and society is less endangered by a one-time killer who has no impulse to repeat his act than by a recurring minor trouble-maker. Punishing him may deter others but will neither rehabilitate him (since he needs no rehabilitation) nor protect others (because they need no protection from him). Moreover, to the extent that deterrence is a goal of punishing, it can often be accomplished just as well by punishing the wrong person instead of the right one: punishing an innocent scapegoat can stop a crime wave and restore people's faith in law and order.
A common utilitarian recommendation, treating a prisoner psychiatrically rather than imprisoning him, sounds humane but has seldom had favorable results in practice. This is not only because most criminals are not greatly helped by attempts at psychiatric treatment, but because the psychiatrist in charge is in a position to play God with other people's lives, moulding them into his image of what they should be and refusing to release them until his aims for them have been achieved. And if treatment is continued until “cure” is pronounced, the sentence can be indeterminate: a person convicted of petty theft may be incarcerated for years under conditions of coercive “therapy” so severe that he would prefer a determinate prison sentence to indeterminate treatment. (See Jessica Mitford, Kind and Usual Punishment. )
Most sociologists and penologists tend to be utilitarians with regard to punishment, and to assume the truth of utilitarianism as axiomatic rather than arguing for it. Defenses of the utilitarian theory of punishment include: Robert Waelder, “Psychiatry and the Problem of Criminal Responsibility,” advocating “complete elimination of the concept of retribution”; Bertrand Russell, Roads to Freedom; B.F. Skinner, Science and Human Behavior; and Karl Menninger, “Therapy, Not Punishment.”
A third theory of punishment, much less frequently discussed in the current literature, is the restitution theory, whose focus is not on the offender or on society in general but on the victim: the principal purpose of punishing should be to attempt to restore to the victim (or his family) the values lost by the criminal act. The view is not a new one—ancient penal codes, such as that of Hammurabi, appear to have been largely restitutive in nature—and restitution is currently a major factor in settling civil suits, but as a recommendation for all punishment as an alternative to the other theories it has gained popularity only quite recently, e.g., in Randy Barnett's essay “Restitution: A New Paradigm of Criminal Justice” in Assessing the Criminal, and several other readings in the same volume.
Many views of punishment incorporate features of all the above theories. Rawls, for example, holds that utilitarian considerations should apply to legislators in formulating laws but that retributive considerations should apply to judges pronouncing sentence. The Swedish philosopher Alf Ross has developed this kind of view at some length in his book On Guilt, Responsibility, and Punishment. See also Hyman Gross, A Theory of Criminal Justice. One problem with mixing the views, however, is that when applied they sometimes are in conflict with each other: murder, the severest of crimes, should be punished most severely according to the retributive theory, but if the utilitarian theory is employed, in many cases, no good might be accomplished by punishing at all; and whether restitution should be made would depend on whether the deceased has any heirs.
A survey of all these theories of punishment, with arguments defending and attacking each, is presented in “Punishment, Protection, and Retaliation” by John Hospers. The literature of punishment is so enormous as to defy summary, and every anthology of readings in ethics includes some essays on the subject. The Appendix to John Kleinig's book Punishment and Desert contains a detailed and comprehensive bibliography on punishment which is recommended above all others, together with the author's analysis of the concept of desert, which is fundamental to the retributive theory. Another good analysis of desert is given in Edmund Pincoff's essay “Are Questions of Desert Decidable?” An unusually penetrating anthology of original essays on punishment is Philosophical Perspectives on Punishment, edited by Edward Madden, Rollo Handy, and Marvin Farber—particularly Brand Blanshard's essay “Retribution Revisited.”
When is a person morally responsible for his actions? The adjective “responsible” is derived from the verb “respond”—and degree of responsibility has something to do with a person's ability to respond, e.g., to moral reasoning and the citing of relevant facts. Because of their inability to respond in the appropriate way, we do not hold animals, infants, or insane persons responsible. But the issue is more complex than this, of course; the best book-length treatment of various aspects of the problem is Jonathan Glover, Responsibility. See also Frederick Vivian, Human Freedom and Responsibility. Aspects of the general problem of responsibility are ably discussed in Joel Feinberg, Doing and Deserving; the best brief anthology of contemporary readings on the subject is Gerald Dworkin (ed.), Free-will and Moral Responsibility.
Of the main conditions alleged to absolve a person from moral responsibility—ignorance, coercion, and “mental disease”—(1) the best discussion of ignorance as excusing (or non-excusing) is still H.A. Prichard's classic essay “Duty and Ignorance of Fact.” In a legal context, the status of ignorance is discussed by Jerome Hall in his essay “Ignorance and Mistake.” (2) The concept of coercion by other persons, and its relation to moral responsibility for action, is discussed in Chapter 9 of F.A. Hayek's The Constitution of Liberty and in Robert Nozick's essay “Coercion,” and is frequently referred to in various essays already alluded to. (3) The relation of mental disease (or “mental defect”), including insanity, to responsibility is discussed from varying points of view in the selections in the chapter on insanity in Herbert Morris (ed.), Freedom and Responsibility, as well as in Joel Feinberg and Hyman Gross (eds.), Philosophy of Law. Thomas Szasz argues against the concept of insanity in The Manufacture of Madness.
The word “freedom” has many meanings, and is commonly divided (among other ways) into “freedom from” (coercive acts of others) and “freedom to” (what one can do if one chooses). The implications of this distinction are clearly pursued in Isaiah Berlin's classic essay “Two Concepts of Liberty” and further distinguished in Chapter 1 of F.A. Hayek, The Constitution of Liberty. The concept of freedom is clearly discussed, and with even greater subtlety and complexity, in Chapter 1 of Joel Feinberg, Social Philosophy. The main book-length discussions of the concept of freedom are Felix Oppenheim, Dimensions of Freedom, and Harold Ofstad, The Freedom of Decision. An unusual and interesting collection of readings on freedom, including some Continental essays not often encountered in the world of Anglo-American philosophy, is Albert Hunold (ed.), Freedom and Serfdom.
The relevance of the metaphysical problem of determinism and free-will (not the same as “freedom” in any ordinary sense) to ethics is, to say the least, not obvious. Many persons contend that all that ethics requires is that decisions have consequences in action, which is obviously the case, and that issues of determinism should be relegated to metaphysics. Some of these possible relations are explored, however, in Sidney Hook's anthology Freedom and Determinism in an Age of Modern Science, and very perceptively in Elizabeth Beardsley's essay “Determinism and Moral Perspectives.”
That people in different tribes and cultures have different moral rules was known to historians even before Herodotus reported on his visits to the Egyptians and Phoenicians. In the twentieth century anthropologists have covered the world with reports of varying moralities and traditions. Conspicuous among these were Edward Westermarck's Ethical Relativity and William Graham Sumner's Folkways. But it is not clear from all these data what has been proved. It is clear that there is cultural relativity of rules: one tribe has strict rules against theft, another has rules only against getting caught. But has relativity of fundamental moral principles also been proved? It might well be the case that different rules of behavior are needed in adaptation to different conditions: for example, in a desert society it would be a crime to waste water, but not in a water-affluent society; in a pre-refrigeration age it would be dangerous to eat pork that isn't freshly slaughtered, but today such a prohibition would be pointless. The survival of the group could be a unifying principle explaining and justifying the diversity of rules; so could the maximization of the quality of life of individuals within the group.
In any case, ethical relativism, as opposed to cultural relativism, holds that there is no one justifiable set of moral principles for everyone—and when one is concerned only with rules, this seems undeniable, as the example of water in the desert society illustrates; but many would argue that ethical relativity of fundamental principles cannot be justified. This is argued for example by W.T. Stace in The Concept of Morals, by the anthropologist Ralph Linton in “Universal Ethical Principles: an Anthropological View” and by Kai Nielsen in “Ethical Relativism and the Facts of Cultural Relativity.” Many rules of conduct, in any case, are not properly considered moral rules at all but simply unquestioned taboos. (See Martin Lean, “Aren't Moral Judgments Factual?” )
The most noteworthy recent book-length discussions of ethical relativism include: Abraham Edel, Ethical Judgment; Paul Taylor, Normative Discourse; John Ladd, The Structure of Moral Codes; John Ladd (ed.), Ethical Relativism; Bernard Gert, The Moral Rules. Attacks on ethical relativism in favor of ethical absolutism include Judith Jarvis Thomson, “In Defense of Moral Absolutes”; Brand Blanshard, “The New Subjectivism in Ethics”; and Renford Bambrough, “A Proof of the Objectivity of Morals.” A balanced and judicious weighing of the factors pro and con in the issue of relativism is contained in the chapter “Ethical Relativism” in Richard Brandt's Ethical Theory.
In addition to the above, there are many topics in normative ethics to which separate articles and books have been addressed. Among those most interesting to students of ethics are the following:
1. Medical ethics. The hottest subject in ethics these days, medical ethics has received extensive treatment. It includes such questions as, should the patient-physician relation be legally privileged as the lawyer-client relation is? Under all conditions? Should medical researchers be free to experiment with genetic material to produce new types of human beings? Should brain surgery be permitted to correct psychological disorders? What should be the medical profession's attitude to population control? What are the moral and legal implications of putting people into cold storage after death, and resurrecting them after a century or two? A collection of excellent and challenging essays on these and other issues, taken from medical journals as well as philosophical ones, and superbly organized, is Ronald Munson (ed.), Intervention and Reflection: Basic Issues in Medical Ethics. Another is Ethics in Medicine, (eds.) Stanley Reiser, Arthur Dyck, and William Curran.
2. Abortion. This troublesome topic, with one foot in metaphysics (when does the fetus become a human being?) and another in ethics (is it better to have an abortion than to have a child with genetic defects?), has received much attention in the literature. There are two brief but excellent collections of readings on this subject, each representing diverse points of view: Joel Feinberg (ed.), The Problem of Abortion and Baruch Brody (ed.), Abortion and the Sanctity of Human Life. Individual essays on the subject constantly increase in quantity: any volume of the quarterly journal Philosophy and Public Affairs (published by Princeton University Press) exhibits additional pieces on this as on all of the following subjects. Glanville Williams's The Sanctity of Life and the Criminal Law discusses legal as well as moral aspects of abortion, euthanasia, and related topics.
3. Euthanasia, both passive (allowing people to die) and active (intervening so as to cause their death). A good introduction to the issue is “Active and Passive Euthanasia” by James Rachels. The section on euthanasia in Munson, Intervention and Reflection, contains the best brief collection of papers on this subject, as well as a bibliography. See also Onora O'Neill and William Ruddick (eds.), Having Children and Bonnie Steinbock (ed.), Killing and Letting Die.
4. Morality and the law. When should the law be disobeyed? What is to be done in conflicts between the authority of law and the authority of conscience? The State has power, but has it authority? See Mortimer Kadish and Sanford Kadish, Discretion to Disobey; Lon Fuller, The Morality of Law; R.S. Downie, Government Action and Morality; Peter Singer, Democracy and Disobedience. Three anthologies of contemporary essays on the subject are especially provocative: Hugo Bedau (ed.), Civil Disobedience; Norman Care and Thomas Trelogan (eds.), Issues in Law and Morality; and Richard Wasserstrom (ed.), Morality and the Law. Robert P. Wolff, In Defense of Anarchism, defends anarchy only in theory. An excellent anthology, containing historical selections (Plato, Aristotle, Hobbes, Rousseau, Locke) as well as contemporary essays, including some by anarchists, is Freedom and Authority, ed. Thomas Schwartz. A brief expository textbook summarizing historical and contemporary views is Norman Bowie and Robert Simon, The Individual and the Political Order, with numerous clarifying discussions, though with a strong socialist bias.
5. The treatment of animals. The extent to which we ought to consider the effects of our actions on animals, and the consequences for human conduct, are interestingly explored in a number of recent works. Peter Singer's Animal Liberation goes so far as to argue that it is immoral to consume animal flesh. A balanced discussion of the subject is Stephen L. R. Clark, The Moral Status of Animals. An anthology of readings on the subject is Animal Rights and Human Obligations, (eds.) Tom Regan and Peter Singer. See also Joel Feinberg, “Human Duties and Animal Rights.”
6. Ethics and business. Two anthologies of essays on the special application of ethics to business enterprises are Richard DeGeorge and Joseph Pichler (eds.), Ethics, Free Enterprise, and Public Policy, and Thomas Donaldson and Patricia Werhane (eds.), Ethical Issues in Business.
7. War and morality. Whether war is ever justified, and what conditions must be met in order for one to participate in a “moral war,” is the subject of several recent anthologies: Richard Wasserstrom (ed.), War and Morality, and M.M. Wakin (ed.), War, Morality, and the Military Profession, of which the first half is devoted to ethical matters related to war and the second half to the specifics of military strategy and their ethical implications. Thomas Nagel's “War and Massacre” is a good introduction to the issue, as well as suggesting solutions to some of the problems.
8. Suicide. The English Debate on Suicide contains the principal arguments on the matter over the last three hundred years. Since ending one's own life is generally agreed today to be each person's right, the issue is now somewhat moot—which makes the historical controversy all the more interesting.
9. The nature of harm. According to the “Harm Principle,” the law should prohibit persons only from harming others (as opposed to requiring them to help others). But is it clear what harm is: can ideas “harm” as well as fists and guns? What of violations of copyright? interference with privacy (and what kinds)? The Harm Principle is eloquently and briefly discussed in Chapters 2-4 of Joel Feinberg, Social Philosophy. Richard Taylor in his Freedom, Anarchy, and the Law distinguishes “natural” from “conventional” harm and gives reasons for restricting the concept of harm only to the former. The anthology Violence contains many important distinctions involved in this issue.
10. Paternalism. Under what conditions is one person entitled to act on behalf of another in making the second person's decisions for him and for his benefit? Aside from conditions of infancy, senility, or insanity, is a person justified in using coercion against another person for that person's own good? See Gerald Dworkin, “Paternalism,” and other interesting essays on the subject in Ronald Munson (ed.), Intervention and Reflection. See also Joel Feinberg, “Legal Moralism and Free-floating Evils,” and for special applications, John Hospers, “Libertarianism and Legal Paternalism.”
11. Freedom and expression. Mill's arguments for virtually unlimited freedom of expression were stated in his classic work On Liberty (applying to freedom of speech, press, and peaceable assembly), though many commentators have alleged that in defending liberty Mill was sometimes deserting the officially utilitarian basis of his arguments. Excellent essays commenting on Mill's On Liberty are contained in Peter Radcliff (ed.), The Limits of Liberty. Mill's position was savagely attacked in James Fitzjames Stephen's Liberty, Equality, Fraternity (1888). In the twentieth century Mill's position has been championed again by numerous writers, conspicuously by H.L.A. Hart in Law, Liberty, and Morality, and as savagely attacked by Patrick Devlin in The Enforcement of Morals. All these books are available in convenient paperback editions; in addition, a recent anthology of readings on this topic is Freedom of Expression, edited by Fred Berger.
12. Religion and ethics. The idea that religious belief is necessary to provide a basis for ethics has suffered considerable decline in the twentieth century. A good statement (among others) of Aquinas' reasoning on the subject is contained in a small recent book, Aquinas and Natural Law by D.J. O'Connor. Emil Brunner's The Divine Imperative is a twentieth-century defense of religiously based ethics. Most theological treatments of the issue devote most of the space to metaphysics and theology, and the defense of the view that ethics rests on religious belief is simply a footnote to a larger system.
On the other side of the issue, Kai Nielsen's little book Ethics without God presents a readable and well argued case for the non-necessity, indeed the irrelevance, of religious belief as a foundation for ethics. A neglected but quite memorable book on morality without religion is Richard Robinson's An Atheist's Values.
There is no lack of books attempting to cover the entire field of ethics (occasionally including social philosophy as well). Only a few of them, broken down into convenient categories, can be listed here.
1. Histories of ethics. Henry Sidgwick's brief nineteenth-century classic, A History of Ethics, is still constantly referred to by students of the subject. Among the recent histories of ethics, Alasdair MacIntyre's A Short History of Ethics is both readable and highly informative.
2. Expository textbooks on ethics. Since most textbooks these days are anthologies, the number of books surveying the field of ethics entirely in the author's own language is limited. The longer ones (over 400 pages) include A.K. Bierman's Life and Morals and John Hospers's Human Conduct. The best known of the shorter introductions is W.K. Frankena's Ethics; also current are Principles of Ethics by Paul Taylor, Introductory Ethics by Fred Feldman, Moral Decision by Stephen D. Ross, and Ethics: Theory and Practice by Jacques Thiroux.
3. Anthologies of ethics: classical and contemporary. The field abounds with anthologies of readings in ethics, which include selections from Plato to the present. An extremely meaty and well organized selection is William Frankena and John Granrose (eds.), Introductory Readings in Ethics. Other well chosen selections occur in Moral Philosophy: Classic Texts and Contemporary Problems, (eds.) Joel Feinberg and Henry West; Ethical Theories, (ed.) A.I. Melden; Morals and Values, (ed.) Marcus Singer; An Introduction to Ethics, Robert Dewey and Robert Hurlbutt (eds.); Ethics: Selections from Classical and Contemporary Writers, (ed.) Oliver A. Johnson; Readings in Moral Philosophy, (ed.) Andrew Oldenquist.
4. Anthologies of ethics: contemporary. The largest and most comprehensive selection of readings on all aspects of ethics, taken entirely from the twentieth century, is Wilfrid Sellars and John Hospers (eds.), Readings in Ethical Theory. Other selections of contemporary readings include Kenneth Pahel and Marvin Schiller (eds.), Readings in Contemporary Ethical Theory, and Paul Taylor (ed.), Problems of Moral Philosophy. Treating almost exclusively problems of contemporary normative ethics, at a lower level of difficulty for the untrained reader, is James Rachels (ed.), Understanding Moral Philosophy.
Professor John Hospers's preceding bibliographical essay on the tradition of modern Anglo-American ethical analysis (“The Literature of Ethics in the Twentieth Century”) evidences the enormous range of problematic issues that fall under the ethician's scrutiny: the nature of moral good, egoism, utilitarianism, deontology, rights, justice, freedom, moral responsibility, and a host of “special topics,” such as medical ethics and the morality of war.
The following summaries on ethical analysis seek to supplement, from diverse viewpoints, themes outlined by Professor Hospers. The opening four summaries help clarify the notions of human rights and virtue. Beginning with the Green and Wikler summary, we turn to a series of studies on the crucial moral concept of “personal identity,” especially in John Locke's philosophy. Other ethical issues studied are human freedom and determinism, egoism, relativism, utilitarianism, and property rights.
“Some Recent Work in Human Rights Theory.” American Philosophical Quarterly 17(April 1980):103–115.
This is basically a comprehensive survey of systematic discussions of human rights since about 1950. Machan divides human rights theories into essentially two groups: noncognitivists, who deny that anyone knows that there are human rights and maintain, roughly, that by “human rights” we mean some general social preferences or presuppositions of moral discourse; and cognitivists or natural rights theorists, who hold that human rights are principles we know to be binding on persons in a social context, or basic conditions of a good human community based on human nature. The noncognitivists include Margaret Macdonald, Joel Feinberg, A. I. Melden, Wm. T. Blackstone, et al., while the cognitivists include Ayn Rand, Eric Mack, Alan Gewirth, Ronald Dworkin, Robert Nozick (uneasily), et al.
After analyzing these theorists' positions, Machan offers a list of problems still facing human rights theories. Noncognitivists all tend to fall prey to relativism and arbitrariness, while cognitivist or natural rights theorists have thus far failed to appreciate the enormous depth of philosophical work needed to justify their case. Nevertheless, some attempts fare better than others. While Rand's objectivist theory of human rights has depth, it lacks the requisite detail to make the argument succeed. Mack's argument, in turn, is far more meticulous, but it omits the treatment of various epistemological and metaphysical issues that most critics of natural rights theories rightly demand. Nozick's “argument from best explanation” approach is hardly a complete defense of individual rights and encounters problems by combining Hobbesian and more humanistic conceptions of human nature. Gewirth fails to treat objections to his crucial assumptions about the purposiveness of human action. He also treats the right to freedom, which is general, and the right to the enabling conditions (making actions possible for children), which is special, as if they had the same political standing. Dworkin promises objective standards for his political principles but does not deliver them, thus resting his seriousness about rights on little more than quicksand. Still, all these more recent discussions seriously advance human rights theory beyond its earlier confusions and neglect.
“Rights and Roles” in Right and Wrong (Cambridge, Massachusetts: Harvard University Press, 1978, pp. 167–197).
If doing good and avoiding harm constitutes our basic obligations to others, how can there be special professional obligations to certain people that necessarily involve harming some other people? Why, for example, should a doctor be loyal to his patients, at the expense of others who might need scarce medical supplies more urgently? Or why should a lawyer be loyal to his clients at the expense of an adversary whom the lawyer knows to be innocent?
The possible answer to these dilemmas might be found by treating professional obligations as analogous to the obligations of friendship. Just as friends ought to be freely chosen in a context that initially involves no harm to others, so also clients and patients should be freely chosen in such a context. But just as it is not the role of a friend to see that the resources he transfers to friends are most efficiently or fairly distributed in the first place, so it is not the role of a doctor or lawyer to see that the help he provides is distributed most efficiently or fairly. The tasks of seeing that all people are treated fairly and that resources are used efficiently are more properly handled by all the people in their other role as citizens. The doctor or lawyer, then, may be obligated only to help his client, since the legal/medical context in which he works has been co-created by other citizens. This conclusion is no more surprising than our common-sense conviction that we are obligated to help our friends first, since we chose them freely in a social/legal context that has been co-created by other citizens.
The superiority of this approach can be seen by examining its chief rivals. To expect doctors to select patients by some standard of “efficiency” alone could entail either drafting (or coercing) persons to become doctors, or coercing doctors into selecting only certain patients. Alternatively, if lawyers were forbidden to take on clients unless they could represent them without lying or (incidentally) harming others, then some clients might never receive fair legal representation under an adversary system. Thus, the right of doctors and lawyers to select their own patients/clients and to identify themselves with their clients' interests (as a friend would do) is a role-specific freedom that it would be wrong to interfere with for reasons of either greater “efficiency” or “fairness.”
“Liberty, Virtue, and Self-Development: A Eudaimonistic Perspective.” Paper delivered at the Liberty Fund-Reason Foundation sponsored Symposium on Virtue and Political Liberty; Santa Barbara, California, April 24–27, 1980.
By founding political philosophy on a “rights-primitive” basis (where rights are axiomatic), rather than on a “responsibilities-primitive” basis, many modern political philosophers have severed the connection between liberty and virtue.
Basically, there are two fallacies at the root of the “rights primitive” approach. One fallacy is the notion that people ought to be given things throughout their lifetime, whether or not they have earned or deserved them through the cultivation of their own personal virtue. This is a fallacy because it assumes that desert doesn't have to be earned once a person has left the dependent stage of childhood.
The second fallacy is the notion that all the desirable benefits of life can be conferred by other people. This fallaciously assumes that a person's needs are basically “economic” in nature, and continue to be so, even after a person has left the economically dependent stage of adolescence.
A eudaimonistic theory avoids these fallacies by reconnecting moral and political philosophy via a more classical notion of the virtues. The virtue of generosity, for example, can be considered an indispensable precondition for moral maturity. And, the development of this virtue presupposes that an individual be allowed to benefit others who have earned the right to appreciate what he offers. There is no injustice, then, in allowing a Stravinsky to develop his musical talents in a way that only a few others might appreciate. Nor is there injustice in demanding that a Stravinsky be given the use of an orchestra when he has earned the right to express his talents in this way. Only by tying political desert to a moral merit that we earn by our own efforts, can moral and political philosophy be properly reunited.
“On Improving Mankind by Political Means.” Paper delivered at the Liberty Fund-Reason Foundation sponsored Symposium on Virtue and Political Liberty; Santa Barbara, California, April 24–27, 1980.
Should legal sanctions be used not merely to restrain our vices, but to increase our virtue as well? According to Aristotle, the legislator can be a moral educator in the sense of promoting virtue. But Aristotle's position has been challenged by thinkers as diverse as John Milton and Albert Jay Nock, and his position becomes even more questionable once we distinguish between “substantive rules” (authority-sponsored rules of distributive justice) and “ceremonial rules” (spontaneously generated rules that embody respect for persons).
Nock's argument against substantive rules (rules that carry penalties) was that forced virtue is no real virtue; i.e., that the demand behind such a practice of “moral education” contains a contradiction. A man forced to give away his property is hardly virtuous when he does so.
But neither of these thinkers gave much attention to how “ceremonial rules” might be capable of teaching respect and, therefore, virtue. Ceremonial rules include such practices as asking for permission to use something that another is using, saying “please” and “thank you,” and generally respecting others as free and rational beings. The key to these rules in the process of teaching virtue is that their rationale is capable of being understood by the people expected to follow them. Even a child can understand that asking permission to use something is a way of discovering what belongs to him and what does not. But a child who is only given substantive distributive rules (with their attached penalties) cannot truly understand why he is being asked to give up something that he wants. If he obeys the rule, then it can only be because he experiences it as a command to which penalties are attached for noncompliance. This type of “moral education” might be better compared to the type of training that a soldier gets in wartime. The resulting actions may be “efficient,” but they are performed mechanically and without understanding. Only a moral education that increases both understanding and an appreciation for others can teach virtue. And these ceremonial rules are not the type of rules that legislators seek to impart by the use of “the political (coercive) means”—i.e., by commands and penalties.
“Alienation, Sociality, and the Division of Labor: Contradictions in Marx's Ideal of ‘Social Man’.” Ethics 89(January 1979):82–94.
The author claims that Marx's view of estrangement contains a basic contradiction. On the one hand, Marx believed communism would abolish such alienation and that man's sociality would become truly actualized. On the other hand, Marx believed that the source of man's social nature was the division of labor, which was also the source of estrangement; if communism abolished estrangement by abolishing the division of labor, it would have to abolish man's social nature!
The basic contradiction of capitalism, according to Marx, was that capitalism was at once the most social and cooperative form of production and yet its organizing principle was one of egotism and self-interest, as individuals' interactions with one another are based on divided, opposed, and separated interests. Capitalism magnifies and brings to a head the estrangement which began once people took on different occupations and tasks, and thus different ideas, outlooks, desires, and interests. The division of labor continues as physical labor is separated from mental labor, the individual is separated from the state, and people become one-sided and stunted. This increasing division of labor is directly related to the introduction and subsequent blossoming of private property, since the latter allows and hastens people's separation from one another and is an essential ingredient in the broad and complex exchange relations that contribute to continued estrangement.
Given this analysis, it is easy to see why Marx thought communism's abolition of estrangement would mean the abolition of the division of labor and private property. People would then produce what, when, and where they want, and would do so only on the basis of human needs, as opposed to some narrow conception of interests.
The problem with this is that Marx's materialism entails that people come together only because of material factors. Marx explicitly states this in The German Ideology: people come together not on the basis of culture, moral ideals, political institutions, religion, etc., but because of the necessity of cooperation, which derives from the need to maintain and reproduce themselves as individuals and as a species. Cooperation means social relationships, which mean a division of labor. Once these social relationships take on a peculiarly human form, where the genesis of productive skills is not just biological, needs expand, specialization begins, the division of labor becomes more complex, and interdependence widens. As the sphere of interdependence involves more and more people, man becomes more social. The abolition of the division of labor in communism would thus mean that people would not need to cooperate or be social. Given Marx's analysis, a purely voluntary association of producers (one involving no coercion or necessity) is a contradiction. If everyone's capacities are fully developed, and scarcity is abolished, men would not need each other and the atomization of individuals would begin.
There are only two ways out of this dilemma: either admit that Marx was not serious that communism would actualize social man, or admit that Marx would have to concede there are other bases for sociality than material ones. Both of these alternatives are unattractive. If we take the former, what is so advantageous about self-sufficient individuals with all their powers developed so that they do not need anyone? Perhaps a society of such people would end estrangement, but it would do so at the cost of promoting pure egoism. The second alternative would mean that cultural and/or social needs would bring communist persons together. Cultural needs would be needs to learn from one another, to expand one's potential by associating with others who have similar needs. But it seems a pious hope that such needs by themselves will achieve the harmony Marx hoped for, and it violates the spirit of Marx's materialism to make such an idealist assumption. Social needs would be the need for the company, approval, affection, etc., of others—in general, a broad gregarious tendency. But Marxist assumptions require that such a general tendency be given specific content by historical details in order to vindicate the claim that such gregarious tendencies will bring people together harmoniously. However, the absence of thing-oriented and physical needs makes it unlikely that it could be given such content. Marx's theory entails that social relationships in the absence of any material basis would not be very complex, which implies that communist society would become very primitive. So no matter which turn we take, Marx's dilemma remains insoluble.
“Brain Death and Personal Identity.” Philosophy and Public Affairs 9, no.2(1980):105–133.
Can we justify the tendency to define death in terms of brain death instead of the cessation of heart and lung function? Arguments based upon biological and moral considerations have been advanced, but only an ontological argument—based on a defensible concept of personal identity—directly engages the issue of how we define death.
Biological arguments which define death as brain death emphasize the crucial role of the brain stem in regulating such life-functions as those of the heart and lungs. Thus, when the whole brain dies, the body's other functions soon cease to operate as a system. But it is only a current technological limitation that the brain stem cannot be replaced by an artificial aid performing its regulating function. Hence the continued functioning of the lower brain stem is not essential to the systematic functioning of the organism.
Moral arguments which define death as brain death emphasize the crucial role of higher brain functioning in making consciousness possible and thereby giving value to human life. Even if the body could survive death of the neocortex for several months or years, this type of life would have no more value for human life than preserving an appendix in a bottle of formaldehyde. It is therefore seen as appropriate to begin “death-behavior” (mourning, turning off life-supporting equipment, removing vital organs for transplant) when the neocortex has ceased functioning. However, defining death is not the same task as deciding when it is appropriate to begin death-behavior, since meaning is not determined by the behavior that it may give rise to. “We have only to realize that the moment of pulling the plug need not be the moment of death to see that defining death is a different job from deciding when it is best to remove the life-support systems.”
Once an ontological distinction is made between the patient and his identity, we can grant that the individual (say, Jones) may cease to exist even if “the patient” remains alive. Then, if the loss of capacity for mental activity occurs at brain death, and psychological continuity is a necessary condition for personal identity, Jones will cease to exist with brain death, even if “the patient” continues to live. Psychological continuity is a necessary condition for personal identity because it provides a causal tie between person-stages which the spatio-temporal continuity of brain tissue or other body structures can not provide. Thus, a given person ceases to exist with the destruction of whatever neurological processes underlie that person's psychological continuity and connectedness. It is not simply that a mindless future life would be of no value to us, but that such a life could not be ours. No moral conclusions follow from this definition of death, but it is unlikely that the definition could be used to justify euthanasia for those whose lives had merely ceased to have value.
“Locke's Theory of Personal Identity.” Philosophy 54(April 1979):173–185.
It is widely held that John Locke viewed personal identity primarily as a function of consciousness and memory. Objections have been raised against this reputed position by numerous modern philosophers including Anthony Flew, J.L. Mackie, and Bernard Williams. Some contend, for example, that Locke's theory must be supplemented by the notion of bodily continuity either because memory alone is not sufficient or because the concept of memory is itself dependent on bodily conditions. Others claim that, since memory is faulty, forgetfulness according to Locke's theory would entail a partial loss of personal identity.
Oddly enough, Locke treats these and other standard objections to memory in the Essay and dismisses them as not bothersome. For Prof. Helm, these perennial criticisms ignore what, in Locke's mind, was the relationship between consciousness and memory in the make-up of personal identity. In Locke's theory, Helm asserts, identity consists in the “spatio-temporal continuity of consciousness.” The philosopher himself explicitly states: “...Since consciousness always accompanies thinking and it is that that makes everyone to be what he calls self...in this alone consists personal identity, i.e. the sameness of a rational being.”
Within this framework, memory gives evidence of the spatio-temporal continuity of consciousness, but it itself is not the essence of one's identity. The role of consciousness in identity is thus logical and metaphysical, while memory plays an epistemic role, as a test for the continuity of consciousness.
Recognizing the fallibility of memory, Locke could declare: “Ideas in the mind quickly fade and often vanish quite out of the understanding, leaving no more footsteps or remaining characters of themselves than shadows do flying over fields of corn...” Nonetheless, there are three senses (requiring careful delineation) in which Locke regards memory as necessary to personal identity.
First of all, while it is not logically necessary that one remember certain events of the past, remembering (especially of oneself) remains an essential component of consciousness. Secondly, in a forensic sense, memory is required for an individual to have evidence that he is identical with someone in the past and thus responsible (in his own and others' eyes) for certain past actions. Augmented or ideal memory (as on the day of the Last Judgement) would allow for perfect justice in assessing actions. Barring divine inspiration, however, humans must content themselves with provisional justice in this world. Thirdly, a logically sufficient condition for identity is that the individual consciousness can recall some past action, not that it does recall it. This point is made in the chapter “On Retention” in the Essay.
Thus, the relation between memory, consciousness, and personhood assumes a much more complicated character in Locke than is normally assumed. The basic error of many interpreters is to hold that Locke posited perfect memory as a condition for identity and responsibility. More complex objections usually rest upon this fundamental misapprehension. These evaporate, however, with an understanding of Locke's more subtle position.
“Moral Science and the Concept of Persons in Locke.” Philosophical Review 89(January 1980):24–45.
Locke's early writings emphasize the idea that general moral truths rest on universal truths about human nature. In his later writings, Locke denies that universal truths such as “all men are rational” can be both certain and instructive; only mere verbal truths can be certain. Locke further argues that for us to be certain of a universal proposition which is not true by definition we would need either direct experience of all the members of a kind (which is possible only if the kind is restricted) or experience which assures us of things beyond our experience.
Despite this, Locke asserts that the concept of “moral man” may help us ground universal truths. He makes this argument by way of an analogy with mathematical knowledge, which he thought capable of truths both universal and certain. Locke held the conceptualist view that our own ideas are the archetypes and essences of the mathematical sort. Since mathematics is concerned with conventional kinds (called modes), not natural kinds (called substance kinds), we can know all there is to know about the essences of mathematics, since the real and nominal essence are the same. Mathematical propositions are universal because they are propositions about ideas (not things), and certain because they are merely about the relation of ideas. Furthermore, mathematical propositions are instructive as well because their predicate ideas are not contained in the subject and mathematical propositions contain information about real particulars. Locke thought ethics would provide truths similar to those found in mathematics. Moral ideas give us adequate notions of moral kinds, and moral essences have necessary connections with other real essences. Still, in order for there to be moral knowledge of the kind Locke was seeking, an additional premise was needed: that it be humanly possible to demonstrate the necessary connections among moral ideas. This premise was one Locke later grew skeptical of; his separation of the concept “moral man” from “human” shows where his thought was heading. The idea was that “moral man” would apply not to all biological humans but to any creature which was a corporeal, rational being.
In the fourth book of the Essay, Locke's attention shifted to that of “rational selves.” He considers this a “clear” idea, i.e. a representation of a real essence (which refers to particulars) in the mode kind. The concept of rational selves is even more specialized than that of moral man since it does not involve reference to corporeality.
The author concludes by saying that Locke has a genuine insight here: the notion of a person is relevant to ethics, it is different from the natural-kinds concept of humans, and the extension of this concept does not depend on the inner structure of individuals. Locke's concept of a person was also, of course, crucial to his often discussed concept of personal identity.
“Locke on Suicide.” Political Theory 8(May 1980): 69–182.
Locke proclaimed in the Second Treatise an absolute prohibition against suicide on the grounds that we are God's property and are thus made to last during His pleasure. However, other aspects of Locke's thought don't fit with this absolute, theologically based doctrine. In the Essay Concerning Human Understanding, we are told that nature intends the pleasure and preservation of the whole, not each of its constituent parts. God designed pain for the preservation of the whole. This implies that suicide for those with unceasing pain cannot be condemned by Locke, as it does not impede God's plan for the good of the whole.
Furthermore, within the Treatise itself there are doubts concerning how seriously one should take the absolute prohibition on suicide. Locke justifies the taking of slaves in war partly on the grounds that, should the victor's reign become too harsh, the slave can resist and thus obtain the death he desires. And Locke suggests that the slave is a moral agent in doing so, since he acts voluntarily. That a voluntary action designed to produce one's death is not denigrated but rather is used as a partial justification of slavery points to Locke's less than absolute prohibition of suicide.
Even more direct evidence is provided by Locke's declaration that in the state of nature man lacks liberty to destroy himself or any creature in his possession except where justified by some “nobler use.” Why then the insistence that the “law of nature” absolutely prohibits suicide? The author offers three reasons. First, Locke wanted to bask in the reflection of conventional (theologically grounded) opinions. He believed theological arguments were important for the vast majority of mankind that must believe rather than know. Furthermore, parallel secular and nonsecular arguments would help the reader become accustomed to the fact that God's demands are not distinct from those stemming from one's desires. Secondly, the prohibition of suicide furthers Locke's argument in the following way: since all men are equal as God's workmanship, the prohibition on suicide provides Locke with a nonutilitarian argument for refraining from taking others' lives. Thirdly, the denial of a man's arbitrary right to take his own life provides support for the right to revolution. Citizens can resist arbitrary power because no “body can transfer to another body more power than it has itself” and nobody has absolute power over oneself. The right to resist arbitrary power is strengthened by our status as God's property. Of course, the fact that no one has arbitrary power over one's own person or property under the law of nature does not rule out suicide under all circumstances.
“Freedom, Determinism and Character.” Mind 89(January 1980):106–113.
The author argues that the sense in which actions flow from character traits points to a problem for those “incompatibilists” who think that there is an incompatibility between determinism and freedom. While some compatibilists have made arguments similar to Sankowski's, he differs from them in that he does not believe that actions must be causally determined by character in the sense of a universal law (“if p, then q”) in order for free action to flow from character. A person's character is such that many, but not all, acts freely follow from it.
If the incompatibilists were right, as we move from acts which are not predictable in practice to acts which are more or less predictable, we should find that we consider such acts or agents less free. In fact, we do nothing of the kind. The incompatibilists may reply that the only determination at issue is that of universal causal law, so that predictability which occurs as a result of character traits is irrelevant. But this reply is wrong: if necessity is held to cancel out freedom, what difference should it make whether the necessity is of the kind that occurs in universal causal laws or in looser claims about character traits? The incompatibilists' position is based on the alleged fact that factors outside a person's control generate a necessity which cancels out freedom. If this is the position, then there is a slippery slope regarding the type of necessity about which incompatibilists worry. The incompatibilist reply fails and the argument retains its force.
“From Catch-22 to Slaughterhouse V: The Decline of the Political Mode.” The South Atlantic Quarterly 78 (Winter 1979):17–33.
Imagistic literature does not necessarily reveal the cultural milieu it describes; such literature can, however, be significant for understanding an era when considered as a perception of the society it represents. Hartshorne presents two examples to defend his argument: Catch-22 by Joseph Heller and Slaughterhouse V by Kurt Vonnegut. Both books offer a fictitious view of the world and its problems together with a series of rules for coping with these problems.
Heller's Catch-22 creates a symbolic situation of an individual, Captain John Yossarian, in conflict with his corrupt military superiors during World War II. Yossarian's search for his own personal salvation results in his desertion from the army. Rising above the claims of a particular group of men unworthy of respect or obedience, he finds his freedom.
Vonnegut's Slaughterhouse V could be considered more of a religious work than Heller's political fable. The protagonist, Billy Pilgrim, is coming to terms with the dichotomy between man and whatever it is that is responsible for the organization of the universe. Billy Pilgrim is kidnapped by aliens from outer space and taken to the planet Tralfamadore. The resolution he arrives at is to accept his role as a pawn of forces he cannot control. Billy hopes only to accept life's circumstances and to love and be kind to others.
Hartshorne suggests these books are reflective of the era in which they were written—Slaughterhouse V in 1969, Catch-22 in 1961.
Yossarian of Catch-22 was striving for a particular goal of personal freedom. He was rebelling against a specific human conspiracy, not against the entire bureaucratic system that caught him up in that conspiracy. The events in the novel move in a particular direction, and the story ends on an optimistic note when Yossarian deserts his company. For Yossarian, survival constitutes a victory.
If Billy Pilgrim is fighting, it is against the whole universe. There is no contest in Slaughterhouse V because there is no possibility of victory. “There are no bad guys for him to defeat, except perhaps God.” For Billy, survival is not crucial because death is inevitable. All one can hope for are a few good moments to cherish in life.
The tone of these novels echoes the political culture of the 1960s. The decade began with optimism and the firm belief that change is possible with the proper strategies. As the decade progressed, however, a feeling of “we have struggled, and we have failed” became pervasive. The theoretical orientation of the protest movement changed from vigorous reform to violent expression. “It is not the specific evil to which we must address ourselves, it is the evil of the whole system—which has produced evil.”
Catch-22 portrays the mood of the early 1960s with an attitude that effective resistance by the individual is possible, if we are careful to outsmart the authorities. Joseph Heller offers us instruction in how to achieve solutions to our problems.
Slaughterhouse V exemplifies the decay of reformist hopes in the late 1960s. Kurt Vonnegut can only offer us perspectives on how we may learn to live tolerably in a world we cannot change.
Both books make strong political statements, capturing the mood swing of American culture during the 1960s. Although not set in the '60s, these novels deepen our understanding of that era.
“Asymmetrical Freedom.” Journal of Philosophy 77(March 1980):151–166.
How are freedom and value related? What is the connection between a person who freely controls his own actions and a person who merits moral claims of praise or blame? Many have claimed that being a free agent is necessary if one is to be a moral agent. This view gains its plausibility from the intuition that if a person's interests, values, and desires are psychologically determined by his or her heredity and environment, then a person cannot help doing what he does and thus can do nothing that is praiseworthy or blameworthy. The problem with this view is that a “free” person, one who is not determined to act as he does, does not seem to be a moral agent. For if I can act against my deepest interests, values, and desires, it seems more likely that I would be considered psychotic than a moral agent.
The problem is that our intuitions go both ways. When talking about freedom the “incompatibilists” seem right—an act is free only if it is psychologically not determined. When thinking about value, the “compatibilists” seem right: no moral agency is possible (no praise or blame) unless I am psychologically determined. How can we have both freedom and morality?
Wolf suggests that philosophers have not gotten a firm grasp on our intuitions. It is with regard to blame (i.e., a person doing bad actions) that lack of determination is necessary for moral agency. In cases where an act was not free (because, for example, the actor's childhood caused him not to try to stop compulsively stealing) the incompatibilists seem right. With regard to praise, however, the situation is reversed. It is precisely because a person is psychologically determined (e.g., “I couldn't help but aid my friend”) that we are moved to praise him. If a person is generous only out of childhood habit, his actions may not be praiseworthy; whereas if he was taught to be generous as a child and as an adult freely decided it was a virtue to be generous, then his actions are probably praiseworthy. Thus, with regard to praise, the compatibilists are right. A person is free only if he is determined: a good act is one in which the agent could have done otherwise had there been good and sufficient reasons to do so, but the person was in a situation where there were no such reasons. With regard to blame, the incompatibilists are right. A person can only be blamed for a bad action if he acts in the face of good and sufficient reasons to act otherwise in that situation.
This means that the freedom we want is the freedom to find the “True” and the “Good” so as to be determined by them. The problem of free will cannot, therefore, be understood in value-free terms. It is easy to see why many philosophers have supposed that being determined by the Good is not really being determined.
Finally, Wolf points out that on her view psychological determinism is false, since for a person to perform a blameworthy action he must not have been determined to do so.
“Freedom and Constraint by Norms.” American Philosophical Quarterly 16(July 1979):187–196.
The author clarifies Kant's claim that one is free insofar as one acts according to the dictates of norms and principles, as opposed to causes.
First, Brandom focuses on linguistic norms. What makes a linguistic performance correct or incorrect, or an utterance appropriate or inappropriate, are the human social practices of a community—such as the practice of criticizing the utterances of others for perceived failures to conform to the practices governing linguistic performances, or the practices involved in adjudicating disputes. The instances of a social practice are whatever the community takes them to be, as opposed to objective kinds, whose instances are what they are regardless of what any particular community says. Brandom does not deny that there can be appraisals which concern the objective features of an utterance; his point is that the norms which specify the criteria of membership in a linguistic community are not objective. If truth of utterances were such a criterion, language would be impossible, since it would presuppose infallibility.
The distinction between objective and social kinds throws light on the dispute between naturalists and nonnaturalists over the relation between objective facts and norms. Since social practices express norms, it might seem that the question is whether there is an objective difference between the social and the objectively factual. Instead the author suggests that the difference between social and objective phenomena is itself a social distinction. But what is the difference between treating some system as a set of social practices as opposed to objective processes? The difference is that between translating something and giving a causal explanation of it. The former involves setting up our own set of social practices as an unexplained explainer and showing how the practice in question differs from ours vis-à-vis what one should do in a certain situation. This process assumes that the system in question contains similar norms of appropriateness and justification of its performances. Thus when we translate a stranger's performance we treat his performances as if they were our own. Kant's original suggestion that the realm of Freedom differs from the realm of Nature now becomes: we treat someone as free if we treat him as one of us. That is, he is free if we see him as subject to the norms inherent in the social practices, conformity to which is a criterion of membership in the community. We treat him as unfree insofar as we see him in terms of causes which constrain him.
The author broadens this notion by noting that most of the sentences we utter have never been uttered before. One has not learned a language unless one can invent new sentences which are deemed appropriate by the community. This creative aspect of language is a paradigm of expressive freedom. Mastering a new language means one is capable not only of forming new descriptions and making new claims, but also of forming new beliefs, intentions, and desires (sophisticated beliefs, intentions, and desires require language) and thus of engaging in new actions. Thus it is only by virtue of being constrained by the norms inherent in social practices that one is free to generate and understand new possibilities. This notion of expressive freedom shows that Hegel's criticism of Kant was correct: freedom is not just constraint by norms but constraint by certain types of norms (those which make expressive freedom flourish). Furthermore, exercising one's linguistic capacities creates new practices, which in turn make new performances possible. This shows that Hegel was right in asserting that there is a dialectical relationship between the individual and the norms within which he operates. Brandom concludes by noting that social and political constraints are justified if they make possible an expressive freedom which would not exist without the norms.
“Is Kant the Gray Eminence of Contemporary Ethical Theory?” Ethics 90(January 1980):218–238.
Recent ethical theory is governed by a paradigm, consisting of an ignoble picture of human beings and two basic principles. Human beings are depicted as appetitive animals in pursuit of their desires. The two principles proclaim that human beings' occupation with what comes naturally or with what is in their interest to do is not an ethical concern; rather—and this is the second principle—ethics enters in when someone's self-seeking behavior interferes in some way with another's activity. Veatch argues that this is a contemporary paradigm by citing Rawls, Hare, Harman, Baier, Nozick, Frankena, and Gewirth.
Kant provides the rationale for this contemporary paradigm, not so much in the sense that he literally articulated these two principles, but because his philosophy has seemed to many to imply them. The first principle corresponds to Kant's view that our natural, unchosen inclinations and tendencies have to do only with the causes of, rather than the ethical reasons for, action, and that acting to satisfy such inclinations is irrelevant to ethics. Thus, in this version of Kant, moral action can never be directed to any sort of natural end or goal, because such ends and goals would only be causally determined, and thus nonrational and heteronomous—action determined by some object rather than by the autonomous, choosing will itself. Furthermore, according to Kant all such heteronomous actions are done to promote one's happiness and are thus egoistic. The second principle in the contemporary paradigm corresponds to Kant's view that the criterion of moral and rational behavior is that one acts for reasons which are not simply self-centered but universalizable and hold for rational beings as such.
The problem with the Kantian paradigm is that the first principle eliminating all goal-related action from the moral realm renders the universalizability criterion empty. Why not, then, sanitize the egoistic nonmoral ends or goals so that they will be appropriate for universalizability? But to do this, one needs a theory of the objective good which modern moral theorists reject. Instead they resort to a kind of trick: why not universalize our ordinary desires and ends? For instance, Hare says that once we take our desires and purposes to be right, then we are committed to universalizability and hence ethics enters the picture. For Rawls it is the fact that we would choose to abide by his two principles of justice in the original position and this amounts to a commitment to universalize the primary goods of life. This likewise moves us from nonmorality to morality. But, counters Veatch, the fact that we would say that our desires are right or that we would choose certain principles under certain conditions is merely an interesting fact which cannot move us from the nonmoral to the moral. He suggests the way out of this impasse is via a theory of objective good like that of the Aristotelian/Thomistic tradition. If such a view of the good is correct, then there is no longer any opposition between nonmoral egoistic purposes and moral general principles. For if something is objectively good, it is moral and desirable both for myself as well as for everyone else.
“Plato, Popper and the Open Society: Reflections on Who May Have the Last Laugh.” The Journal of Libertarian Studies 3, No. 2(Summer 1979):159–172.
By abandoning Plato's concept of a fixed human nature, Karl Popper may have thrown out the very concept that is necessary to support the kind of open society he wants to defend against Plato. Popper believes that a normative ideal based on a fixed human nature will not suffice for two reasons, one “scientific,” another “existential.” Popper's “scientific” reason for opposing fixed natures is that genuine theories are unable to disclose the way the world really ought to be, since they can't even disclose the way the world really is. At best, science can only tell us the way that the world really is not.
Popper's “existential” reason is that a fixed human nature would impose a closure on possible human values that is incompatible with our responsibility to freely choose our own values. Popper would see moral freedom constrained by any theory that would limit the possible values that can be selected from “World 3” (Popper's term for the realm of the imaginable).
However, when Popper makes his own value selection from World 3—in the form of an “open society”—he must either succumb to utter arbitrariness or have a legitimate way of going beyond the limits of science and “imposed” values. He would be succumbing to arbitrariness if he did not tell us why his particular image of man is to be preferred to that of Plato or anyone else. But if all theories must either avoid telling us the way man really is and ought to be, or end up imposing some image of man upon all of us, how can Popper defend his preferred image of man as the preferable one? Seemingly, he can do so only by restoring the Platonic notion of nonscientific knowledge of the preferable, and of a fixed human nature on which it rests. By this means, an open society can be defended by basing it on a justified image of man.
“Karl Popper, Objective Knowledge, and the Third World.” Philosophia 9(December 1979):45–62.
Karl Popper's theory of the “third world” (the world of the imaginable) needs an amendment in order to account for its autonomous and objective character. Popper distinguishes three worlds: (1) the first world is that of physical objects; (2) the second world is that of the mental acts (or dispositions to behavior) that we direct towards physical or mental objects; (3) the third world is those mental objects themselves that form the content of our theories, arguments, books, libraries, etc.
Problems arise when Popper attempts to establish that the third world is both man-created and “objective,” or autonomous. The example of the natural numbers simply does not work, for while the numerals we choose to describe these natural numbers are man-created, the numbers themselves are discovered and are therefore not man-created. Similarly, the numerals that we use to describe natural numbers are as physical as chairs or tables, so we should consider them as elements belonging to “world number one.”
However, if sentences and numerals are properly distinguished from meanings and numbers, the modified “third world” could include meanings and numbers as nonphysical, discovered, and completely autonomous or objective. That would give the following components to our total world:
S: the linguistic form in which a number or theory is formulated;
C: the objective content expressed by S;
M: the mental act or state of X; and
P: (where relevant) the fact in the physical world to which S refers.
Now, the logical relationships and unintended consequences of “C” can be understood as both autonomous and not manmade, forming a bona fide “third world.”
“‘Because It is Mine’: A Critique of Egoism.” The Personalist 60(April 1979):186–200.
The egoist's moral position has represented a constant and formidable dilemma for those seeking a rational justification for a universal ethic, because the egoist bases his stand on a universal principle and universalizes it; nonetheless, he gives preference to his own claim whenever it comes into conflict with the claims of others.
Many ethical thinkers have sought to undermine the egoist stance with charges of subjectivism by arguing that justice, like all general and impersonal principles, must be applied universally, unless we can identify some relevant difference which distinguishes a situation or a person from a general category. The egoist, they claim, makes exceptions in his own favor whenever his interests dictate, ignoring the moral criterion of the relevant difference.
The egoist might reply that, besides universalizability, reason possesses another important dimension. Plato, for example, has defined reason as the faculty which makes possible the identification and achievement of the good of one's self as a whole. Justice or fairness thus becomes to ta heautou prattein—doing one's own work to attain one's total good. As for “relevant differences,” certainly the fact that the other person is not oneself constitutes a most relevant difference. The “Individualistic Argument” thus runs: “I ought to do x simply because I am I...and it is in my interest to do x.”
Prof. Beatty replies to the egoist that “I am I” is a statement that can be made by anyone. As such, it does not constitute a relevant difference. Furthermore, he asserts, “what makes a reason a good one is not its source—that would be authoritarianism—but rather its content, features of the reason on its own merits independently of its pedigree or the power or status of its advocate.” To say otherwise would be to fall into the very subjectivism which the egoist tries to avoid by his reasoning.
Nonetheless, one might salvage the “I am I” principle by another approach. Kant has pointed out that freedom is the presupposition of all moral claims. To assert that persons are free is to assume that there is a domain in which they may enjoy freedom from interference precisely because it is their own domain. Exceptions in favor of one's interests thus become licit because personal ownership is itself requisite for the autonomy presupposed by any moral principle.
However, Prof. Beatty asks, what principle allows the egoist to violate the theoretically inviolate autonomy of others to further his own ends? According to Beatty, the egoist might reply that morality itself requires the freedom to choose from among moral principles. If a moral agent were compelled by argument to choose one course of action over another, that would in effect render him unfree and would undermine the possibility of morality itself. The egoist thus would liberate himself to choose arbitrarily.
Closely examined, this form of freedom (or better, capriciousness) appears intolerable. First of all, it leaves the egoist open to the depredations of others pursuing their own ends. In addition, by eliminating the requirement of justification, it subjugates the egoist to brute psychological or ideological compulsions which undermine the personal autonomy he defends.
Thus, Prof. Beatty concludes, the egoist who grounds his self-preferential claims on ownership and property must finally resort to the subjectivist “autonomy of morals.” In doing so, he must abandon all pretence that his position is intellectually grounded or indeed that it is a morality at all.
“Rule Utilitarianism, Rights, Obligations, and the Theory of Rational Behavior.” Theory and Decision 12(June 1980):115–133.
Harsanyi argues for the superiority of rule utilitarianism over act utilitarianism. First, rule utilitarianism achieves a better degree of spontaneous coordination (in the absence of communication) than act utilitarianism. An act utilitarian, concerned with the particular action that will maximize utility, will regard all other agents' strategies, even utilitarian ones, as given. A rule utilitarian, by contrast, will act on the assumption that all other utilitarian agents will follow the same moral rule. Given these differences, the act utilitarian will tend to act noncooperatively in situations where identical actions by a number of people are required to produce the desired outcome (e.g., a number of people must come out and vote for a desirable measure). That is, the act utilitarian will not perform the action in question when he is reasonably sure that his action will not make a difference in producing the desired outcome, which means the odds are that the act utilitarian won't perform the action. For the act utilitarian will reason that if others perform the action, his contribution isn't necessary; if others don't perform the action, why should he? A rule utilitarian, on the other hand, will tend to perform the action, since it is part of following the rule (e.g., voting) that maximizes utility.
Second, the author argues that rule utilitarianism is superior because it can make a rational commitment. Given the act utilitarian's concern with acts, he will not make a firm commitment in the beginning to follow the same strategy throughout the “game” but will decide on each move separately; whereas the rule utilitarian will follow the strategy he began with. It is clear that the rule utilitarian strategy of being committed in advance is more effective in maximizing utility.
Third, the author claims that the act utilitarian cannot respect individual rights and obligations since he will violate such constraints where the act of doing so produces a greater payoff. Rule utilitarians, on the other hand, have little trouble respecting such rights and obligations, as long as the rule observing such constraints produces more utility than disregarding them. Furthermore, the author claims that only rule utilitarianism can provide a firm foundation for such constraints. Rights and obligations provide the following advantages for society: they make it easier to form relatively specific expectations about people's behavior; they increase incentives to engage in desirable behavior; and they give rise to a beneficial division of labor.
The fourth advantage of rule utilitarianism is its role in solving the “voter paradox”: although, on utilitarian grounds, it seems irrational to vote, voters do not think this to be so. Voting is not irrational if one looks at it from the point of view of a rational commitment to a comprehensive strategy.
“Two Justifications of Property.” American Philosophical Quarterly 17(January 1980):53–59.
The author offers a schema (or set of formal conditions) for justifying property rights. Property rules perform two basic functions. First, they assign to individuals rights over things; such rights forbid other persons to interfere with the owner's use of these things. Rights over things can be called rights of title. Second, property rules create mechanisms for acquisition, transfer, and alienation of rights. These rules can be called “criteria of title” and answer the question “Who owns what?” The rights actually assigned and the criteria used to assign them, taken together, give the content of a set of property rules.
The criteria taken as a whole must be consistent, determinate, and complete. Consistency means that the criteria of title must not allow an individual to own something and not own it at the same time and in the same way. Need is an example of an inconsistent criterion of title: many parties may need the same scarce thing at the same time. Determinacy requires that it be possible in principle to determine unequivocally who owns what. Again, need is an example of an indeterminate criterion: persons need food, but that need can be fulfilled in a variety of ways, so the fact that someone needs food cannot determine what he should own. Completeness means there must be some procedure by which individuals or groups can come to own something.
With the requirements for the schema now set out, the author discusses two schemas. The first is teleological-consequentialist, as allegedly exemplified by Locke. Here, the criteria of title are labor (when expended upon what is previously unowned), exchange, gift, and bequest. Taken together these criteria are consistent, determinate, and complete. The rights of title in Locke are restricted only by the standard of innocent use, i.e., property may not be used in ways prohibited by natural law and the principle of harm (i.e., through wasting or spoiling which harms others through the resulting scarcity). Locke's set of property rules is justified because it can be demonstrated that no other possible set of property rules can produce as much utility within the state of nature. Clearly, the labor criterion produces utility for the laborer, and it does not affect anyone else's utility because of the restriction on appropriation (“enough and as good left for others”). Gift and bequest clearly raise the recipients' utility; though some expectations will be frustrated by failure to receive what was anticipated, this will be a trivial source of disutility where “enough and as good left for others” is in operation. Exchange benefits all parties if the exchanges are voluntary.
The only criterion of title which it might be thought could produce better results than Locke's is that of want rather than labor. But want fails the consistency and determinacy conditions.
To complete the justification it must also be shown that the rights of title maximize utility. This is easily done: the criteria give owners maximum freedom to do as they wish, limited only by the non-harm restriction, which, in effect, means that others' utility is not seriously diminished.
The author also discusses a deontological schema as set out (in part) in Alan Donagan's The Theory of Morality. This schema holds that the criteria and rights of title are justified if they are consistent with deontological principles, in this case the principle of respecting every human being as a rational being. It turns out that these criteria and rights of title are virtually identical with Locke's. The criteria and rights of title treat the agent as autonomous (as defined by Donagan), that is, as having the right, limited only by the moral law, to decide what one's good is and how to pursue it. For the only restrictions on rights and criteria of title as discussed by Donagan are those he thinks are required by the moral law (e.g., not to harm others). Of course, both the deontological and teleological criteria depend upon a crucial premise: that there is enough and as good left for others. Where this premise is falsified, the schema may provide a valid yet unsound justification.
Legal theory and policy intimately intertwine with ethical philosophy through such shared concerns as rights, value judgments, and justice. Legal philosopher Lon L. Fuller, who is Literature of Liberty's cover subject for this issue, was one of a handful of critics of legal realism during the 1930s and 1940s who sought secure and nonarbitrary moral foundations for traditional jurisprudence. Faced with the lawless brutality of totalitarian ideology during this period, Fuller championed a form of natural law as an active process of rational and ethical inquiry to counterbalance the dominant legal realism. Legal realism tended to identify law with the de facto edicts of any government. Fuller and other defenders of a free and reasoned society faced a “crisis in jurisprudence” in trying to establish a rule of law that constituted a logically and ethically defensible alternative to Realpolitik. (See Edward A. Purcell, Jr., The Crisis of Democratic Theory: Scientific Naturalism and the Problem of Value. Lexington: The University Press of Kentucky, 1973, chapter 9.)
The following summaries also examine the complex interplay of ethical and legal issues, including natural law and human rights, and privacy in its legal and moral dimensions. The concluding summaries examine the social, moral, and legal aspects of criminal justice, contract law, Hobbesian law and authority, civil disobedience, and the proper legal discourse between autonomous lawyers and their clients. Throughout these summaries run interconnected themes of rights, autonomy, and legal authority.
“Rights and The United States Constitution: The Declension from Natural Law to Legal Positivism.” Georgia Law Review 13(Summer 1979):1447–1500.
The Founders of the U.S. Constitution based their political philosophy on natural law in four ways. First, natural law provided moral standards for virtuous men to govern their conduct; second, it defined ethical standards to guide positive law; third, natural law set limits to power, beyond which the government invited revolution; fourth, and most importantly, natural law was the source of natural rights, which were antecedent and superior to government. Natural rights checked power, and the sole justification of governmental power was to protect natural rights. From its very beginning, judicial review was marked by adherence to the principle that there were natural rights beyond and limiting statutory law.
The slavery issue and the Thirteenth Amendment confirmed this interpretation of the Constitution as a protector of natural rights. By the 1840s, slavery advocates such as George Fitzhugh no longer maintained that the Constitution upheld natural rights; on that view it was therefore not surprising that the Constitution upheld slavery. When the Thirteenth Amendment outlawed slavery, the Constitution was again interpreted as an expression of natural law/natural rights.
With the twentieth century, however, legislatures and courts have discovered new economic and social “rights” to supplement the older natural rights to life, liberty, and property. Vieira views such a new mix of rights as incoherent, since fulfilling economic rights requires disrupting the spontaneous order of the market and infringing personal rights. Legal positivism has accompanied the emergence of these new “rights.” Legal positivism, however, is inconsistent with American constitutionalism, since positivism places the basis of the law in the will of the sovereign, while constitutionalism rejects absolute sovereignty by its doctrine of pre-governmental natural rights inhering in individuals.
Two doctrines reveal the courts' current commitment to positivism. First, there is the “rational basis” test for determining whether governmental power interferes with constitutional liberty. This test asserts that governmental power infringing on a liberty is constitutional unless no conceivable set of facts would justify the power.
The positivist Justice Holmes, in his dissent in Lochner v. New York (1905), argued that a law could not be said to interfere with liberty unless a rational and fair man would admit that the statute infringed on fundamental principles as they have been understood by our tradition and law. Holmes maintained that the Constitution did not embody a timeless, unchangeable economic theory.
The second doctrine which has undermined legal and natural rights is the “balancing” doctrine. Here “incidental” infringements on a person's fundamental liberty are held to be permissible if they fulfill a legitimate and compelling governmental aim, if the government can show its use of power is directly related to that aim, and if the least restrictive means is taken to achieve that aim. This belief that the government can infringe on fundamental liberties for a compelling aim renders natural rights empty. Also, since the government is supposedly acting for its citizens, this doctrine implies that the compelling interests of some citizens, on whose behalf the government acts, may require violating other citizens' liberty. Again, this view contradicts the theory of the Constitution.
“The Basis and Content of Human Rights,” Georgia Law Review 13, No. 4(Summer 1979): 1143–1170.
Despite the great practical importance of the idea of human rights, some of the most basic questions about them have not yet received adequate answers. Are there such rights? How do we know there are? What is their scope or content, and how are they related to one another? Are any of them absolute, or may each of them be overridden in certain circumstances?
These are primarily substantive questions which require that we show that persons have rights other than those grounded in positive law or social custom. For if human rights are rights all persons have simply insofar as they are human, then for those rights to exist is for there to be valid moral criteria which justify their existence. When we look for such criteria or principles, however, not only do we find that no single set has been universally accepted; more fundamentally, the very context or subject-matter to which one should look to resolve the disagreements of moral principle is itself involved in disagreement, which neither traditional nor contemporary arguments have satisfactorily resolved. Nevertheless, a non-question-begging subject-matter for morality can be found by considering the general concept of morality. Amid the various divergent moralities, all agree that they are concerned with actions. For all moral judgments, including right-claims, consist directly or indirectly in precepts about how persons ought to act toward one another.
How then does the consideration of human action serve to ground or justify the ascription and content of human rights? It does so when we take a dialectically necessary approach, which proceeds from what all agents (or actors) logically must claim or accept, on pain of contradiction. In brief, because certain objects are the proximate necessary conditions of human action, all rational agents logically must hold or claim, at least implicitly, that they have rights to such objects; since the claim must be made or accepted by every agent on his own behalf, it holds universally within the context of action.
More fully, every agent regards his purposes as good on some (not necessarily moral) criterion; hence he also regards as necessary goods the proximate general necessary conditions of his acting to achieve his purposes. From a consideration of these necessary goods or generic features of action (freedom and well-being) we get the ascription and content of rights; for when claimed by the agent himself from within the context of conative agency, there is a logical connection between these necessary goods and rights. In particular, the agent's claim is prescriptive, advocating that he have these necessary goods; it carries the idea that these goods are his due, that he is entitled to them; and it is made against others who are claimed to have the correlative obligations. But these claims are based on prudential criteria only and hence are prudential, not moral rights.
In order to establish that these claims are also moral rights we have to show that each agent must admit that all others have these rights as well. This we do by showing that the sufficient reason on which the agent must hold that he has rights to freedom and well-being (namely, that he is a prospective agent who has purposes he wants to fulfill) applies to every other prospective agent as well. Thus by the principle of universalizability, every agent must accept that every other agent has the same basic rights to freedom and well-being that he necessarily claims for himself. There is then a valid moral criterion or precept which justifies human rights and which every agent must accept on pain of self-contradiction, the Principle of Generic Consistency (PGC): Act in accord with the generic rights of your recipients as well as of yourself.
The generic rights to freedom and well-being are further specified by analyzing their components. The right to freedom consists in a person's controlling his actions and his participation in transactions by his own unforced choice or consent and with knowledge of relevant circumstances, so that his behavior is neither compelled nor prevented by the actions of other persons. Well-being, viewed as the abilities and conditions required for agency, comprises three kinds of goods: (1) basic goods are the essential preconditions of action, the rights to which are violated when we are killed, physically incapacitated, or when others fail to give us aid when we need it and when it can be given at no comparable cost to those others; (2) nonsubtractive goods are the abilities and conditions required for maintaining undiminished one's level of purpose-fulfillment and one's capacities for particular actions, the rights to which are violated when we are lied to, stolen from, or subjected to excessively debilitating conditions of physical labor or housing; and (3) additive goods are the abilities and conditions required for increasing one's level of purpose-fulfillment and one's capabilities for particular actions, the rights to which are violated when we are discriminated against or when we are denied education to the limits of our capacities.
Since these various rights and duties may conflict with one another, they are only prima facie, not absolute. But the PGC sets the criteria for resolving these conflicts, both in its direct applications at the interpersonal level and in its indirect applications at the institutional level. These indirect applications serve in turn to justify social rules and institutions, including both the minimal state and the supportive state. Thus the PGC requires that three kinds of rights receive legal enforcement and protection: personal-security rights, social and economic rights, and political and civil rights.
“Ordering Rights Consistently: Or What We Do and Do Not Have Rights To.” Georgia Law Review 13, No. 4(Summer 1979):1171–1196.
As American courts have grown increasingly active in this century, and over the past two decades in particular, they have produced a veritable “rights explosion,” especially in the area of “welfare” or “social and economic” rights. But can all these rights be justified as a matter of moral theory—especially when they conflict with our traditional rights to liberty and property, and even with each other? More fundamentally, is the theory of moral rights consistent, or does it necessarily yield conflicting rights?
To sort these issues out, and in particular to determine what we do and do not have rights to, we have to look to the justificatory foundations of the theory of rights. For rights “exist” only insofar as that background theory shows them to exist, a theory that serves thus to sort out those right-claims that can be justified from those that cannot. When we look to that justificatory theory, however, or to the many versions there are of it, we find it up against the traditional problems of moral skepticism: certainly rights are not empirically determined “natural” features of the world, not in any straightforward sense at least; but if they are fundamentally assertoric—functions of interests or values, to be justified, as various positivist schemes would suggest, by “weighing” those interests or values—then moral cognitivism would seem to be all but impossible.
With the recent work in this area by Alan Gewirth, however, the cognitive and hence the justificatory foundations of the theory of rights appear to have been located and secured. Arguing along neo-Kantian lines, and following his dialectically necessary approach, Gewirth has shown that rights are inherent in the basic subject-matter of ethics, human action; in particular, they are functions of the claims to freedom and well-being that we implicitly though necessarily make in our voluntary and purposive behavior, claims that we must accept for ourselves as well as universalize to others, all on pain of self-contradiction. (See preceding summary.) In unfolding this “normative structure of action,” however, Gewirth seems very much to have overextended the argument, not unlike the modern courts. In particular, he argues that in acting we necessarily claim not only the freedom and well-being that are the generic features inherent in our actions and that underpin the traditional “negative” rights to noninterference; in addition, he says, claims to the positive beneficence of others are also inherent in our actions. Thus he goes on to argue for “positive” rights between individuals only generally related, and eventually for the supportive state and the social and economic rights that characterize that state.
As a matter of right, however, these positive general rights cannot be justified. For while it is true, as against the skeptic, that claims to noninterference are inherent in our conative behavior such that even the attempt to deny such claims entails our implicitly making them, with perfect consistency we can deny, as against the moral overreacher, that in acting we necessarily claim what is not ours to claim—the positive beneficence that belongs to others. When we look closely at the matter, we discover that the claims at the heart of the normative structure of action are fundamentally property claims—claims to the property we possess in our persons and our actions, the performance of which serves in turn to generate property claims in the world. As the classical theorists saw, then, however imperfectly, rights and property are inextricably connected.
While the generative arguments are fundamental to showing that we have rights and to showing what we do and do not have rights to, other arguments—from causality and consistency—serve also to show that there are no positive general rights. Like all universalization principles, Gewirth's Principle of Generic Consistency (PGC) is a causal principle; but without the explicit prescription to act, which it does not contain, the PGC cannot impose positive obligations; for the not-doings it permits, because they involve no changes in the world, are not causally efficacious and hence cannot be prohibited on the causal grounds implicit in the PGC. Moreover, because the PGC does permit the liberty of doing nothing at all and hence of being left alone, it cannot with consistency impose positive duties; for these would conflict with the right to noninterference that the PGC does entail. Thus at bottom the PGC is a principle of equal freedom, defined in terms of equal rights to be left alone in one's person and property.
Using the theory of general and special relationships, then, as well as a theory of alienation, all of which is implicit in the PGC and its background arguments, we can unfold the world of rights and correlative obligations entailed by the PGC. This involves showing how property in the world arises and how this property, together with our property in our persons and our actions, serves in turn to delineate our rights of contract and association, including familial association, as well as our rights against torts and crimes. In each of these cases the idea is to give some real content to our rights by locating their foundations in property and by defining their violations as takings of that property. Thus is ambiguity avoided and consistency preserved, at least as much as is possible; and thus are the points at which the theory of rights must turn to the theory of value located—in particular, in the areas of nuisance, endangerment, enforcement, and rectification.
It emerges then that the world of “first-order” rights entailed by the PGC—as opposed to the “second-order” rights of enforcement and procedural justice—is a consistent world in all but rare cases. Nevertheless, it is also a strict world—rooted in principles of reason, not in the (moral) sentiments. Accordingly, there may be occasions when we will not want to live with the strict requirements of the theory of rights, when we will want to turn to the theory of good by way of overriding rights. When we do so, however, we should be candid enough to admit that it is not by right that we override rights; rather we do so in violation of rights. And because the theory of value—and in particular the idea of “social value”—admits of nothing like the cognitive foundations of the theory of rights, we should override rights as a matter of law only on rare occasions, the better to preserve the point that the theory of rights describes the moral order in the broad range of ordinary cases, serving thus as the objective model for most of our law.
“Corporations and Rights: On Treating Corporate People Justly,” Georgia Law Review Vol. 13, No. 4 (Summer 1979):1245–1370.
The growth of the modern business corporation over the past century has encouraged an ever larger number of economic, legal, and historical studies of this institution—some critical of it, others defending it. Critics such as Berle and Means, for example, have questioned the “legitimacy” of the large corporation by pointing to the separation of ownership and control, to the “power without property” that characterizes corporate behavior. Ralph Nader and others have recently resurrected the “concession theory” to claim that because the state “creates” the corporation it has a right to regulate it “in the public interest.” In response, defenders of the corporation have argued that it already serves the public interest and hence should not be further regulated. Their case, that is, has rested primarily on economic grounds alone. In fact, with the exception of Robert Hessen's In Defense of the Corporation, which focuses primarily on historical issues, we have had no systematic normative theory of the corporation, one that could tell us whether and why the corporation has rights, what rights (and obligations) it has, and what rights are held by those who interact with it—shareholders, directors, managers, employees, consumers, and members of the public generally.
The development of such a normative theory requires us first to set forth a theory of individual rights and obligations; for corporate legitimacy—the right of the corporation to exist—is a function of the legitimacy of the acts that bring the corporation into being. The theory of individual rights advanced here rests upon recent work in the theory of action which shows, along neo-Kantian lines, what it means for individuals to have rights, that they have them, and what things they do have rights to. More specifically, individual rights are functions not of values or interests but of property claims in oneself, one's actions, and in the world, which when explicated serve to underpin the broad areas of the common law—and in particular, serve to underpin the rights of association, delegation, and contract, the exercise of which leads to the creation of the corporation.
Corporations are legitimate, then, when they arise through the exercise of individual rights: i.e., when they arise without violating the rights of others. This means that each of the “features” of the corporation, from entity status to perpetual life to limited liability for contracts and torts, must arise and characterize corporate behavior in such a way as not to violate rights. This can be shown even in the case of limited liability for torts, which has always seemed an unjustifiable corporate feature; for close examination of the respondeat superior doctrine, upon which this feature ultimately rests, shows that far from being a violation of the rights of tort victims, whose claims are held against individual corporate actors, limited liability for torts, absent contractual arrangements to the contrary, is a violation of the rights of inactive shareholders. As a corporate feature, then, limited liability for torts can be justified vis-à-vis shareholders when it arises through contract.
But if corporate legitimacy, i.e., the right of the corporation to exist, is a function of the individual acts that create this institution, then the further rights of the corporation are likewise functions of the individual rights of the corporate owners.
These owners do not lose their individual rights, that is, simply because they exercise them in concert. Corporations, then, have all and only those rights that their owners first have to exercise through this institution, those rights that the owners can exercise through such an institution, and those rights that the owners have chosen, in their articles of incorporation, to be exercised through their corporation. When applied to the wide range of issues before the corporation today, this analysis shows that corporations have most of the general rights of speech, action, and association that individuals have, all of the general obligations of noninterference (as with nuisance and endangerment) that individuals have, and all of the special rights and obligations of contract they choose to create. These findings apply to contexts as various as those involving production and pricing, hiring and firing, discrimination, pollution, product and workplace safety, disclosure, antitrust, bribery and kickbacks, board selection, going private and shareholder freezeouts, corporate gift-giving, and many more.
Although the theory of rights is only one domain of ethics, it is, for epistemological reasons, well suited to serve as a model for legal sanctions. Beyond rights, however, there are considerations of value, which is where issues of “corporate responsibility” appropriately arise. It is in this domain that the tension between civic virtue and economic survival comes to the fore, raising fascinating problems as it does, though these are problems best left to private solution.
“Privacy: Its Origin, Function, and Future.” Paper delivered at the conference, The Economics and the Law of Privacy, University of Chicago, November 30–December 1, 1979(Working Paper #166).
Privacy, more than mere secrecy (an information preserve maintained about oneself), is conceptualized in terms of autonomy within society. Secrecy is only a component of privacy in the more fundamental sense articulated by Hirshleifer: privacy goes more to the overarching social structure and its supporting ethos or social ethic.
Hirshleifer's purpose is to examine the origin and social consequences of the “taste” for privacy. Recognizing that culture, although important, does not totally displace biological disposition, he contends that the three social principles of dominance, sharing, and private rights have evolved in Nature as adaptations to social niche. The taste for “private rights” is assumed to be an ingrained attitude laid down over eons of primate heritage.
It is, the author contends, a serious over-simplification to distinguish between “selfish” and “unselfish” behavior, between private goals and public goals, etc. While man does have egoistic, purely selfish drives, his social instincts are more complex, involving at the least the three distinguishable principles of dominance, sharing, and private rights. These ethics have evolved and have become ingrained in the human makeup in association with various forms of social organization over the history of mankind, with each structure being adaptive to the ecological parameters in which human association has taken place. While the taste for privacy in a given incident may represent nothing more than a selfish claim, insistence on one's own rights is also part of a two-sided ethic involving willingness to concede corresponding rights to others, and even willingness to participate as a disinterested third-party enforcer against violators.
Hirshleifer's paper is in an anthropological mode and follows an evolutionary model of analysis. He concludes that doubts should be raised as to the wisdom of maximizing aggregate wealth as the criterion of social policy, and that while it is conventional to deplore merely “commercial” ethics, societies organized on the privacy ethic “have given a good account of themselves historically...in terms of values we consider civilized.” Refusing to forecast the future prospects of privacy, as a social structure balancing individual autonomy with communal responsibility, he concludes that “they don't look very good!”
“Privacy and the Limits of Law.” Yale Law Journal 89(January 1980):421–471.
Reductionists err in thinking that claims about privacy can be reduced to other claims: Gavison argues that privacy is a distinct value worth protecting legally.
Privacy has three irreducible components. Someone loses privacy to the extent that others (1) gain information about him (loss of secrecy), (2) pay attention to him (loss of anonymity), or (3) gain physical access to him (loss of solitude). All three criteria can be combined in one concept, which we can call accessibility: if an individual were totally inaccessible to others, he would maintain perfect privacy.
Gavison's conception of privacy is opposed to other, more general conceptions that see privacy as the right to be left alone or that equate invasions of privacy with assaults on human dignity. The first idea is too broad—it would cover virtually everything—while the latter is false (e.g., being forced to beg to survive may violate one's dignity without any loss of privacy). The idea that privacy is a claim to noninterference with one's personal decisions also misses the point, since one's objections to certain privacy violations (e.g., a central data bank) have little to do with the personal nature of the information acquired.
To see the value of privacy requires a functional analysis (i.e., we must see to what extent privacy promotes other things we value). Clearly, both total privacy and total non-privacy are undesirable. Some degree of privacy is central to realizing an individual's goals under any theory of individual development. Privacy helps to promote learning, creating, and practicing, in that it enables one to concentrate and make mistakes with less pressure; and it helps to promote liberty, in that it allows one to do certain things without fear of hostile or unpleasant reactions from others.
But, if there is some social norm which privacy allows one to violate, why not change the norms if we think they are undesirable or eliminate privacy if the norms are desirable? We cannot, however, change positive morality so easily, and privacy allows nonconformists to protect their lives against excess pressure. Thus Gavison argues against those like Posner and Epstein who see privacy in such situations as merely the opportunity to deprive others of information. Privacy in these instances, insists the author, comes about in the interstices between the need for diversity and the hostility that such diversity often provokes.
Reductionists, basing their argument on present-day legal decisions, claim there is no distinct value of privacy. The author argues this approach is biased, since the lack of coherence in judicial decisions can be explained by other facts—namely, that interests in addition to privacy have been invoked in those decisions. The reductionist approach is further biased in that it studies only actual court decisions, yet there are strong disincentives against going to court over privacy violations. These disincentives are threefold: law is a public mechanism, which strongly dilutes the point of using legal means to rectify privacy violations; many people do not know when their privacy has been violated; and the remedies, such as awards of monetary damages, may be totally inadequate given the psychological damage.
Finally, the reductionist approach makes privacy seem like a recent notion, since this approach only focuses on cases which invoke the privacy concept. But in fact the need for privacy is virtually coextensive with the human condition.
Gavison thinks her account, by beginning with extra-legal concepts, can show privacy to be a coherent concept, and thus will help to explain the actual interest in, costs of, and functions of laws involving privacy.
Given the above, it follows that a strong case can be made for the law containing an explicit commitment to privacy. Losses of privacy are generally undesirable. We need to reform the parts of the legal system which deal with privacy but do not as yet explicitly recognize this fact. Such a commitment to privacy would also give us protection against future invasions which are likely to occur given the increasing technological means of invasion.
“Towards An Autonomy-Based Theory of Constitutional Privacy: Beyond the Ideology of Familial Privacy.” Harvard Civil Rights-Civil Liberties Law Review 14(Summer 1979):361–384.
This article focuses on the question of “what interest does the right of privacy protect?” Eichbaum argues that a family-based right of privacy is constitutionally and philosophically unsound. It is contended that a privacy right grounded in “conventional” interests of marriage and family (rather pejoratively, it seems, styled as “ideology” since the more desirable grounding in “individual autonomy” is not so labelled) seriously limits the “human dignity protected by constitutional guarantees” in that it “champions societal institutions and values and therefore perpetuates the status quo” rather than fostering freedom “to choose and adopt a lifestyle which allows expression of...uniqueness and individuality.”
According to the author, while a family-based right fits harmoniously with majoritarian sentiments, a family-based privacy right ultimately disserves both privacy and the family. Not only does it perpetuate the myth of the family as a haven from society; it also shields the family from the just damnation it deserves, both for its inadequacy in living up to the hallowed concepts with which it is intrinsically associated and accorded deference, and for its utility as a backdrop of conventionality against which alternative lifestyles are judged as less than desirable, appropriate, or acceptable.
Conversely, an autonomy-based right of privacy is a potent force for protection against discrimination or for remedying lack of official favor based on sex preference and behavior and other styles of normative deviance. Philosophically, this frees the notion of privacy from entrapment in the vision of a person as an instrumentality toward the higher goal of the abstract family unit (thereby violating the basic principle of individual equality and autonomy which is at the core of a civil right) and counteracts the discriminatory conceptualization of individuals as members of a family unit “by reducing their human significance vis-à-vis the abstraction of which they are a part.”
The Supreme Court's decisions are said to manifest a disturbing ambivalence toward the autonomy-based formulation of privacy, in indicating that certain forms of perversion may be regarded as beyond the outer limits of that which will be adjudged acceptable behavior according to the normative principles of the judges. For example, it appears that the Court, at least for the time being, has chosen to avoid extension of the “right of constitutional privacy” to cover consensual sexual conduct between adult homosexuals. Recent Court decisions have “clearly served to extend the autonomy-based right of privacy” by invalidating state legislation that prohibits the distribution of contraceptives to children. The author nevertheless finds it a disturbing indication that “the reproductive autonomy of minors and unmarried individuals is protected only because it is somehow incidental to the protection of marital privacy, while consensual sexual conduct between adults of the same sex is not protected because it does not implicate reproduction or other incidences of marriage.” Eichbaum concludes that:
“...The Court's apparent attitude toward consensual sexual relations...is in need of change in order to conform to an autonomy-based conception of privacy. It is to be hoped that the Court will, in time, fully commit itself to the autonomy-based conception of privacy which many of its decisions already suggest. Anything short of an autonomy-based right, whether or not its expression occurs in the context of the family, is a sham when advanced as a civil right....”
“The Right of Privacy and Freedom of the Press.” Harvard Civil Rights-Civil Liberties Law Review 14 (Summer 1979):329–360.
The right of privacy as an independent concept made its first appearance in American law as a civil tort whose remedy was suit for damages or injunction in protection against unwarranted invasion of the “right to be let alone.” Its first articulation was the renowned article by Warren and Brandeis in the Harvard Law Review (1890). As an alleged and court-decreed constitutional right against state interference with inner zones of space pertinent to “individual dignity and autonomy,” Griswold v. Conn. (1965) is the premier case.
While freedom of the press has a long and well-established history in American law, the theoretical foundations of the right of privacy remain relatively unformed and inchoate. Against this background, coupled with such modern developments as the increasing scope of governmental intercession in most areas of national life, the development of modern technology for ferreting out and monitoring everyone's affairs, and the closing in of physical and psychic space for the average person, the need for creation of an adequate law of privacy is imperative for the future health of society.
Emerson concedes that on most points the law of privacy and the law sustaining a free press are mutually supportive features of the basic system of individual rights. There are, however, two major areas where accommodation must be developed. One of these is where the privacy right comes into sharp conflict with the right to publish. The other involves the right of the press to obtain information from the government, which confronts an individual's claim that data about one's personal affairs may be inappropriately disseminated through invocation of right-to-know principles.
In examining the conflict between the status of the law concerning the right to publish and the privacy tort, Emerson surveys theories of the right to privacy, and traces the formulation and application of legal doctrines and remedies. With respect to the conflict between the right of privacy and the right to obtain information, Emerson examines the legal basis and right of the press to obtain information, the legal basis of the right of privacy, and the application of privacy protection.
His overall objective is to make some progress toward formulating an accommodation between the freedom of the press which has an ancient lineage in our jurisprudence and the right of privacy which has developed out of the needs of a technological civilization. Both are now vital features of our system of individual rights. Emerson suggests that a balancing can best be achieved by focusing more on the privacy side of the equation than has been done in the past since the press is strong, healthy, and well-organized, whereas individuals whose privacy is at stake are scattered and weak.
“Social Science Influence and Its Inter-Relationship With the Criminal Justice System: Law and Institutional Practice.” Journal of Constitutional and Parliamentary Studies (India) 11 (January-March 1977):50–74.
The social climate of the 1960s modified the social order and the institutions of social order. One modification emphasized law as an instrument for ameliorating social problems. Radical or critical criminology emerged out of the sociohistorical events that influenced this intellectual climate of the 1960s and early 1970s.
Radical or critical criminology presents the prospect of making a profound impact on academic, popular, and judicial thought about crime and society. It has shifted the focus away from crime, the criminal, and criminality toward a focus on agencies and agents who deal with crime. From this perspective criminal law and its enforcement function as instruments for the control of one social class by another. This new perspective has helped focus attention on how the normative content of criminal law is internalized in different segments of society, how norm holding is related to behavior, and the nature of the legal, social, and administrative apparatus designed for the control of crime.
The author reviews the broader arena of social science and court decisions which have influenced criminal justice; noncourt influences of social science on the criminal justice system; the relevance of the victim; post-adjudicatory processes; the judicial concept of the sentencing role; the philosophy of social reconstruction; the irrelevance of justice; the sociology of law; public attitudes and policy implementation; as well as models of criminal justice.
“Contract Law and Distributive Justice.” The Yale Law Journal 89(1980):472–511.
The law of contracts has three agreed-upon functions: specification of arrangements which are legally binding; definition of rights and duties created by enforceable agreements; indication of consequences for unexcused breach. Beyond these universally recognized functions, however, has been the suggestion that the law of contracts should be used as an instrument of distributive justice in a self-conscious effort to achieve a “fair” distribution of wealth.
In the libertarian view, the state is never justified in the forcible redistribution of wealth. Many liberals, though believing in the desirability of compulsory reallocation of wealth by politically derived favor, are also opposed to the use of contract law as a mechanism for redistribution because distributional objectives are better achieved through the taxation and reallocative process of the state. Kronman, by contrast, argues that the nondistributive concept of the function of contract law is not supportable either on libertarian or liberal grounds, and attempts to articulate and defend a position that a conscious teleological use should be made of contract law for the purpose of accomplishing desirable redistribution.
Kronman challenges the universal validity of the libertarian position by questioning the justice inherent in the voluntariness of exchange. He adopts a working postulate that when a libertarian asserts that contract law should not be used to redistribute wealth (he assumes that the redistributive direction is from rich to poor), his position is reducible to one of two claims: either that existing inequities are justified, or that contract law is an unsuitable instrument for correcting inequities which are unjustifiable.
Noting that wealth may be redistributed through taxation or through contractual regulation, the author challenges the alleged liberal preference for taxation by questioning whether it is either less restrictive of individual liberty or more neutral than is contractual regulation. His ultimate conclusion—assuming one desires to utilize a coercive apparatus in the form of state rules and regulations to reallocate the product of one group or class on behalf of a more favored group or class (making the only viable question that of how the redistribution of product should be accomplished)—is that both taxation and regulatory control of private transactions are equally appropriate, and the choice between them ought to be made on the basis of the situation.
“In Defense of a Hobbesian Conception of Law.” Philosophy and Public Affairs 9(Winter 1980):134–159.
Against H.L.A. Hart's objections, Ladenson defends a Hobbesian conception of law where the subjects habitually obey the sovereign who obeys no one. Since the Hobbesian conception of law leaves out the legitimacy of governmental authority, Ladenson proposes the following remedy. Governmental authority involves both governmental power—the ability to keep peace within a social group—and more importantly, a right to rule.
Ladenson denies that all rights have correlative duties. Only rights which involve claims made against other individuals have correlative duties. The right to rule is a “justification right.” In the case of governmental authority, the right to rule gives the government the right to engage in coercive action which would otherwise be impermissible if exercised by private citizens. Ladenson claims this right to rule is justified by human nature and the desire to avoid mutually destructive conflicts.
Since the right to rule, which is part of governmental authority, involves no claim that the citizens must obey the law or that any use of coercive power by the state is justified, one could uphold the government's right to rule while still maintaining the government ought not to exist. Ladenson claims this accords with common sense. For instance, this concept would imply that the Nazis, not private citizens, had the right to arrest traffic violators, while also implying that the citizens had the right to resist Nazi rule.
Can the Hobbesian conception account for the persistence and continuity of law? As to persistence, if law emanates from the sovereign, why can laws made by an earlier regime still be law under later regimes? There could be a general policy that such laws will have effect unless explicitly overturned.
As to continuity, Hart raises two objections. According to the Hobbesian, a successor to an accepted ruler cannot be considered a sovereign at first, since we do not know if the subjects will accept him. Ladenson replies that such acceptance can begin immediately if there is a strong tradition of succession.
Hart's second argument is that the successor's immediate accession to the throne does not stem just from subjects' acceptance but from a right to make law grounded in a rule of succession. But, Hobbesian theory has no room for such a right. The author responds that the notion of sovereignty includes the right to rule. If the successor inherits this right to rule, the alleged rule of succession is irrelevant; if the successor has an uncontested right to rule he can exercise governmental power to make law.
Hart's most serious objection against a Hobbesian conception of sovereignty holds that it cannot give an account of a legislative power which is constitutionally restrained by the courts. Such a system would transcend a sovereign's laying down and upholding of laws. Ladenson replies with a reconstruction of Hobbes. In the spirit of Hobbes, each branch of government could have sovereignty in carrying out its functions, but be subject to the sovereignty of other branches as they carry out their own functions. If it is true that each branch has the power to carry out its own functions and each resists encroachment on its terrain by the other branches, then each branch has governmental power to maintain peaceful relations.
“Civil Disobedience in the Modern World.” Humanities in Society 2(Winter 1979):37–60.
The author discusses the question: does one have the right to commit civil disobedience? By civil disobedience Feinberg means disobedience to political authority, done from the motive of devotion to a higher cause, which violates the laws of the state, and which leaves one open to criminal sanctions. Civil disobedience differs from ordinary crimes, which are committed from the usual motives of greed, desire to harm others, etc.; and it also differs from revolutionary acts against the state, since the disobedient by and large respects the political structure. Civil disobedience also differs from a situation where one desperately breaks the law in order to avoid a greater evil (e.g., borrowing a stranger's car without permission in order to get a heart attack victim to the hospital); since courts generally accept pleas of necessity to exonerate such crimes, the lawbreaker is not being a civil disobedient. Finally, Feinberg distinguishes civil disobedience from persons who deliberately break a law for the purpose of a “test case,” since such a person sees himself as performing a legal act and hopes that the courts will declare the law he has broken to be an invalid law.
Most philosophers today no longer believe one has an obligation to obey any law just because it is a law; those that do believe there is such an obligation believe it holds only in a reasonably just and fair society and that it is a prima facie obligation (PFO). Feinberg interprets the notion of a PFO as a relevant supporting reason which may not be a conclusive reason since there may be reasons on the opposite side of the issue. When one has a PFO to do A, not doing A is wrong unless there is a reason not to do A which is at least as strong. If there is PFO not to engage in civil disobedience, then it is presumably derived from more basic PFOs such as that of gratitude (to return favors), fidelity (to keep promises), fair play (not to exploit, cheat, or take advantage of others), and that of justice (to maintain, uphold, and help establish just institutions). Feinberg rejects all of these attempts.
The argument from the PFO of gratitude is that by accepting the state's benefits one incurs debts of gratitude. If this argument is supposed to show the ground for the genuine feeling of gratitude, then it must show that the state's benefits are gifts and that they were given from benevolent motives; even if this is shown, there can be no duty to feel grateful; and even if there was such a duty, it would not imply a duty to reciprocate. Finally, even if there was a duty to reciprocate, it would not take the form of obedience to the laws, for no gift entails a right to direct the receiver's behavior.
The argument for the PFO of fidelity sees the state's benefits as more like loans than gifts; since we have consented to take the loans (i.e., consented to obey state authority), there is a PFO to obey the state's laws. Since no one really grants such consent, the argument quickly shifts to the claim that one has tacitly consented, i.e., that one's consent can be inferred from one's continued residence in the country or by acceptance of the state's benefits. But one cannot consent unintentionally or unknowingly. And even if we design a political procedure whereby continued residence would be taken by the citizens to be equivalent to consent (the way silence at a meeting, when the chairman says “Anyone who objects has a minute to speak now,” is taken to mean consent), the opportunity to emigrate does not provide one with the genuine choice that the argument from consent requires.
The argument from the PFO of fair play is that citizens obey the law on the belief that others will do so, and therefore that the burdens of complying with the law will be spread fairly throughout and will be congruent with the benefits achieved by social cooperation. On this view the law breaker takes advantage of his fellow citizens, and thus the PFO is owed to them and not to the state. While this view may be plausible for certain forms of civil disobedience, not all forms of civil disobedience can be construed as taking advantage of anyone. Furthermore, insofar as the benefits deriving from obeying the laws fall unequally on different groups, then the argument from fair play can have little point for those who receive few benefits.
The argument from the duty to uphold just institutions has again some point with regard to certain types of laws, but again not all law breaking or civil disobedience can be construed as damaging just institutions. Insofar as civil disobedience or law breaking helps to provide a safety valve for societal pressures, it can be thought of as strengthening just institutions.
We may conclude that there is no PFO to obey the laws of a reasonably fair and just society, and thus that at least some forms of civil disobedience are justified. While in such a society most of the laws protect things that are worth protecting (e.g., avoiding harm to others), and thus should be obeyed, this is not because they are laws but because they legally protect certain important values. In such a society, there may be a statistical presumption that the law breaker is committing a wrong, only because of the laws' connection with these values.
“The Practice of Law as Moral Discourse.” Notre Dame Lawyer 55(December 1979):231–253.
A lawyer's conversation with a client is a moral or ethical conversation since it always involves notions of obligations, rights, the legitimate use of coercion, and damages. There are three ethical models for the lawyer's attempts to represent the client.
First, the ethics of role. On this view, the lawyer serves the client's own good, either following what the client wants or paternalistically telling the client what to do. The former approach involves a laissez faire or adversary ethic argument (the lawyer need not worry whether his client is right, for the adversary system balances interests). The latter approach involves lawyers either imposing some moral standard on ordinary people or representing the worst off's interest in order to promote a better society. In either case, the ethics of role is inadequate, for both approaches serve the system. The first justifies suspension of judgment on the grounds that the adversary system will straighten everything out; the second justifies paternalism by the claim that the lawyer is supposed to be an agent promoting better social norms. Both rest on the misleading assumption that questions of conscience or the limits of power are irrelevant to the lawyer-client relation.
The second model of lawyer-client relations is the ethics of isolation. Here if the client wants something which the lawyer disapproves of, the lawyer announces this is so (e.g. “I won't make up a will which disinherits your wife and children”). The client then either changes his mind or provides some reasons for what he is doing. The lawyer then either accepts this or refuses to take the case. This approach involves assertion of the lawyer's conscience, but it involves assertions rather than reasoning on both sides. The lawyer and the client do not really communicate or influence each other. Such moral insulation tends to lend a false confidence to the stature of one's moral principles. In short, the approach is totally risk free for the lawyer—his moral world will remain invulnerable.
The third model (which Shaffer prefers) is the ethics of care. Here the lawyer and the client have a genuine moral conversation with both sides respecting one another's autonomy. Borrowing from Gerald Dworkin, Shaffer suggests this involves allowing the client to reflect on his interests, providing him with the requisite information and not deceiving him, and using methods which involve collaboration. Because of the openness and mutuality of this conversation, moral communication may take place.
Sound methodology, appropriate to studying man in all of his complexity and individuality, is necessary if history, sociology, and related disciplines are to portray human nature accurately and live up to the ideal of being true social sciences. In the summaries that follow, we will encounter the debated methodological issues of historism, structuralism in history, the mutual interplay of interpretive theory and historical fact in historiography, the modes of discourse involved in “philosophic history,” hermeneutic or interpretive theory, and phenomenology in the social sciences. The reader may find related issues of methodology dealt with in the July/September 1978 issue of Literature of Liberty, pp. 32–44.
“Historism: Its Rise and Decline.” Clio 8(Fall 1977): 25–39.
Historism (or historicism), whose origins are commonly traced back to the end of the eighteenth century, has been credited with the formation of modern “historical consciousness.” According to this view, the cultural environment in which we live and move makes historists of us all—whether we are aware of it or not. In view of this reputed influence, Prof. Gruner endeavors to isolate those elements which comprise the historist point of view and attempts to ascertain what it has in fact contributed to modern historical perception.
Gruner begins by dismissing two common characterizations of historism as “process thinking” and “a historically oriented world view.” These descriptions are far too sweeping and highlight traits which are not at all unique to historism. Gruner points instead to the concept of individuality as the element most essential to the historist movement.
Historism arose as a reaction to an Enlightenment historiography in which past events and historical periods were viewed almost exclusively as stepping stones to the Age of Reason. The eighteenth century was thus enthroned as the goal and climax of all previous history. Historism, by contrast, considered the phenomena of history as “individual totalities” which cannot be explained by the sum of their parts nor by factors outside themselves. It was not, therefore, by chance that Meinecke chose Goethe's dictum Individuum est ineffabile as the motto of his work on the origins of historism. Burckhardt's Civilization of the Renaissance, a classic historist treatment of an individual period of history, provides a majestic and insightful account of the Renaissance without considering how it had developed out of other periods or how it fitted into the larger context of Western civilization.
In historism process is not eliminated from consideration but, from this perspective, it arises organically out of the category of individuality. Thus, individuality is no longer subservient to a developmental flow.
By emphasizing the irreducible nature of the individuum, historism gave rise to a relativist spirit in historical scholarship. For example, under its influence, historians reacted strongly against the notion of an essential human nature which endures through time and circumstance. The maxim “Man has no nature but only history” summarizes the historists' resolve to prejudge nothing and to deal with each datum in its own terms. This design admits of only partial achievement, however, since every evaluation involves an inevitable (though perhaps tacit) comparison with other “individualities.” Historists dealt with this undeniable tension in their position by a resolute commitment to an openness to experience.
Considering modern historiography, Prof. Gruner finds that the influence of historism has almost vanished. Contemporary historians do not attribute much importance to the notion of individuality. They are rather intent on establishing theories of development—a task in which independent individual totalities have little place.
In the final analysis, the interest of modern historians is a practical one. They esteem knowledge, including historical knowledge, first and foremost as a means to improve the human state—not as an end in itself. In this, Gruner finds that they are much more in tune with the general tenor of modern thinking than the followers of historism. In contrast to Meinecke, therefore, Gruner regards historism as a passing episode in the evolution of historiography rather than as the great shaper of modern consciousness.
“Foucault, Structuralism, and the Ends of History.” Journal of Modern History 51(September 1979): 451–503.
Our century has witnessed a widespread rebellion against historical consciousness and, as a result, history can no longer lay claim to the central intellectual position to which it aspired in the nineteenth century. With the passing of the ideas of progress and of organic or dialectical development, a spirit of discontinuity has inclined intelligent men and women to “more relevant” disciplines.
Ironically, the French historian Michel Foucault figures among the most active leaders in the movement to demolish the historical tradition. Foucault has frequently been tagged a “structuralist” by those seeking a convenient encapsulization of his thought. Nonetheless, the protean nature of his work from the Histoire de la folie (1961) to the Histoire de la sexualité (still in progress) effectively blocks any attempt at easy labelling.
In summary form, one can but outline the areas of subtle discussion of Foucault raised by Megill. Megill begins by attempting to situate Foucault within the context of the structuralist movement— along with such prominent structuralist thinkers as Ferdinand de Saussure, Claude Lévi-Strauss, Jacques Derrida, and Jean Piaget. He distinguishes between a “structuralism of the sign,” with its insights into the nature of language, and a “structuralism of structure,” concerned with the organization and interrelation of bodies of knowledge. Foucault shares points of contact with both types of structuralism, but also transcends these categories.
Foucault rejects the traditional historian's attempts to recreate past events “as they actually occurred.” Instead, Foucault seems increasingly to favor a position in which the portrayal of the past is consciously used as a force to mold the present. This constructionist approach to history may, in part, be traced to the decisive influence of Nietzsche upon Foucault's thought. Nietzsche flatly rejected the nineteenth century cult of history with its tendency to freeze the past and, thus, also the present within rigid and lifeless Apollonian categories. The mythic, ecstatic spirit of Dionysus had been denied its rightful role in recalling the past, thus imposing the shackling “little circles” of Apollonian thought on Western history.
Prof. Megill goes on to discuss the balance of Apollonian and Dionysian elements in Foucault's work. While many Apollonian images appear in the historian's writing (vision, light, the gaze, stasis, etc.) Megill asserts that Foucault's multifaceted oeuvre reflects an essentially Dionysian spirit. The historian's fascination with the subject of madness and its leavening influence in culture is but one reflection of this preponderant mentality.
“Pocock and Machiavelli: Structuralist Explanations in History.” Journal of the History of Philosophy 17(July 1979):309–318.
Professor Geerken analyzes and criticizes the methodology of J.G.A. Pocock's The Machiavellian Moment. The word “moment” denotes a concern with structures of thought across time rather than with thought developing or evolving through time. Unconcerned with intellectual or political history in the conventional sense, Pocock focuses on a concept that he thinks reappears throughout history. This concept deals with the problem of a republic remaining politically and morally stable within a steady flow of events which appeared to destroy all secular systems. In Machiavelli's era, this “moment” was related to Christianity's conception of time as essentially uncontrolled contingency which swept all non-universal phenomena away, particularly the republic. Whereas monarchies were seen as universal, outside of time, and thus as imitating the order of nature under God, republics were seen as subject to fate, fortune, and ultimately self-destruction.
Pocock argues that the solution which humanists (who wished to revive the republican ideal) advanced was to relate particulars to universals by reintroducing Aristotelian political philosophy: specifically, the concepts of the polis, the classical citizen (zoon politikon), and the idea of inherent equality of rights. The polis was understood as a universal with regard to values but a particular with regard to its spatial and temporal location. It was a universal in participation rather than in contemplation, which implied the idea of equality of rights inherent at birth and equal participation in governing and the holding of office. Similarly, the idea of law underwent a shift in the republicans' hands: law possessed the universal aspects of rationality, deducibility, and generality combined with the particulars of custom and circumstance. All law had a universal essence, and the key question to ask was: how well did the particular law fit the circumstances in this particular nation?
Geerken finds Pocock's structuralist explanation unconvincing. Roman law (the tradition which dominated continental legal thought and practice) did not begin with the individual or with inherent rights as a basic unit, but with a group or corporation which conferred rights and maintained liberty under law. The Roman idea was that liberty could exist under both an aristocracy and a republic, as long as freedom under law prevented arbitrary despotism. Because Pocock oversimplifies the relation of particularity (contingency) and universality, which Roman thinkers reintroduced, he cannot correctly interpret Machiavelli. Pocock considers Machiavelli ambiguous because he saw time (fortuna) as both constructive and destructive and similarly saw virtu as both a force within oneself for combatting fortuna and as a force which could create uncontrolled contingency through innovation. Pocock also reads The Prince as a “Hobbesian” work in which a new leader innovates in a world lacking any structure of law. In fact, says Geerken, Machiavelli saw the state as an organic body which in order to keep all the parts in order must vary its techniques. At times virtu requires force and coercion to deal with the bestial men (hence, The Prince); at other times the subtler coercion of law is all that is needed. This is a Roman viewpoint which regards law as an instrument; here Hobbesian categories are simply inapplicable. Thus virtu can sometimes help to tame fortuna, but since law is only an instrument, it can sometimes fail and push fortuna along; hence the destructive side of virtu.
“Political Theory and Historiography: A Reply to Professor Skinner on Hobbes.” The Historical Journal 22, 4(December 1979):931–940.
Warrender addresses Professor Quentin Skinner's criticism of those theoretical interpretations of Hobbes (such as Warrender's) which allegedly ignore the historical climate of ideas and thus distort interpretations. Warrender has claimed that Hobbes was essentially a natural law thinker. If Skinner believes this is a historically absurd interpretation, what historical standard is he interested in? If the standard is the history of natural law, then Warrender's judgment is justified: Hobbes believes law involves principles which apply to all persons, that these laws are knowable by reason, and that they are superior to laws of individual states.
If the standard is Hobbes's political and cultural milieu, then several questions arise. First, whom do we select as Hobbes's contemporaries? Skinner has cited some half dozen theorists who claimed Hobbes as one of their own, as evidence against Warrender's view. But against this, we must set Hobbes's repeated references to the law of nature, Hobbes's more sophisticated interpreters (e.g. Pufendorf), the fact that no one even suggested that Hobbes abandoned natural law until the late nineteenth century, and the fact that even the de facto theorists were not clearly opposed to natural law since they did not believe that political obligation could be validated by civil law alone.
Another question asks from what standpoint we should understand “historical absurdity.” If history is an account of what happened, then we cannot ignore the fact that Hobbes wrote frequently about natural law nor can we dismiss Hobbes's references to God and the Bible as mere window dressing to conceal his intentions. If we are to take the historical evidence seriously, we must account for all of the elements in Hobbes's thought—both the power analysis, the functional interpretation of sovereignty with its de facto implications, as well as the elements listed above.
But if we are to take all of these elements into account, we need not simply a historical but also a theoretical account of how they all fit together. The fact is that Hobbes frequently makes extreme and drastic statements (e.g. “Covenants without swords are but words.”) which sound absolutist, but are qualified in many ways (e.g. the individual's right to self-defense, to run away in battle, not to accuse himself, etc.). These qualifications may only make a slight ripple in the history of political ideas, but they are vital if we are to understand Hobbes.
“Towards a Typology of Historical Discourse: The Case of Voltaire.” MLN 93(1978): 938–962.
A study of Voltaire's historical method demonstrates how his writing dispels the general myth of history as a passive, mimetic discipline which focuses solely on the recreation of past events. Using Voltaire's works as a model, Prof. O'Meara has isolated what she considers the three main components of written history: narration, commentary, and persuasion. Further research should demonstrate that these three modes of discourse exist in all historical writing—but in different forms and different proportions.
Like all histories, Voltaire's works incorporate objective narration which attempts to recreate events as they have actually occurred in the past. However, Voltaire stresses that, in his brand of histoire philosophique, narration demands careful selection and organization of material. Voltaire despised the “fables” of ancient historians who recount uncritically the most unlikely exploits and miracles. As his standard of selection, Voltaire chooses la Nature, who, if we study her closely, provides us with grounds for eliminating exaggerated and even unseemly events. As reasonable as this may sound, a closer examination reveals that Voltaire's concept of nature is strongly laced with bourgeois values and philosophe rationalism. As a result, by the standard of “nature”, Voltaire sees fit to reject as unreasonable any account of incest in royal families.
Voltaire regarded history as a pedagogical tool whose primary usefulness lay in its power to mold the future. With his emphasis on the future rather than the past, he exerts considerable effort at persuading his readers to accept his views. He makes no overt attempt, however, to proselytize his audience. Instead, he relies on more or less subliminal techniques for implanting “reasonable” opinions.
One of Voltaire's most subtle persuasive techniques involves citing “recognized” authorities with heterodox views whom he then opposes by weak assertions of established doctrine. In addition, Voltaire makes ample use of impersonal expressions and modal verbs to lend an air of self-evident authority to what are often highly debatable assertions. Il est clair, il est prouvé, il est certain, etc., as well as the modal verbs falloir and pouvoir, are most often used to undermine the doctrines or social position of the Church. With this rhetorical technique, Voltaire implies that any objection to his statements is unthinkable.
The ability to distinguish among various modes of discourse in historical texts and to establish the dominance of one or the other in individual works of history will ultimately provide a new tool for developing a typology of historical discourse.
Review Essay: “‘The Critical Theory of Jürgen Habermas.’ By Thomas McCarthy.” History and Theory 18(1979): 397–417.
The thought of Jürgen Habermas has both impressed and confounded social theorists in the Anglo-American tradition, since his theories overlap disciplinary boundaries. A crucial feature of Habermas's work has been the search for rational standards of criticism. With insights from psychoanalysis and hermeneutics, Habermas has produced a continuously evolving theory of rational discourse. His description of the legitimacy crisis afflicting “late capitalism” represents a concrete application of this developing theory of communication.
Habermas, like Freud, believes that self-conscious awareness of the deep roots of our conduct and relationships can contribute to greater coherence and realism in behavior. Freud, however, denied that the individual can bring all ingredients of his unconscious to conscious understanding and control. Even if this were possible, society would lose the many beneficial effects of cultural and individual repressions. Habermas, on the other hand, has not identified any necessary limits to a person's self-understanding. Indeed, he seemingly wishes to preserve the Kantian notion of the purposefully aware subject, even while drawing upon the insights of Freudian theory. By a selective reading of Freud's writings, Habermas comes to recognize that primary childhood repressions put some basic limits on the ideal of the fully self-aware self. Nonetheless, he does not see this concession as a refutation of the ideal of discourse. After all, Freud himself arrived at the concept of these limits through a process of rational analysis. In Habermas's view, Freud's insight merely shortens the reach of the ideal, while giving to art a significant role in evoking the unconscious aspects of our psychic life.
In addition to Freud, Habermas pays tribute to hermeneutic or interpretive theorists such as Hans-Georg Gadamer and Peter Winch. These thinkers scrutinize the concepts and aspirations of those entering into social relations, and, in so doing, they penetrate deeply into the meaning and structure of such relations. An understanding of this intersubjective element in social interaction adds a new depth to the old empiricist notion of legitimacy. Since institutional practices arise partially from shared beliefs, the loss of social participants' belief in those values will weaken their performance and even blur their conceptual precision.
Knowing that social beliefs are subject to constant reevaluation, interpretive theorists have denied that the constraints of reason and evidence will ultimately enable us to reduce the number of defensible interpretations of a social order to one. Habermas objects to this view by articulating a theory of communication based on the concept of “ideal speech.” The theory treats as “true” those statements which would gain the assent of all persons under specified conditions of communication which include: symmetrically distributed chances to enter into dialogue, the opportunity for all dialogue participants to question unexamined propositions, and the suspension of all motives except the desire to reach a rational conclusion. Such conditions would eventually result in a “consensual truth” arrived at by the community rather than the solitary seeker.
The practical value of this collective search through dialogue may be seen in the current “crisis of capitalism” in Western society. According to Habermas, the covert disaffection with the society of production, evidenced by deteriorating worker motivation, drug use, high divorce rates, etc., speaks for the adoption of some form of socialism in the West. However, socialist theorists have not convincingly dealt with the problems of freedom and democracy in their system.
In the face of this dilemma, numerous strains of thought from systems theory to Marxist positivism have tended to deny the possibility of an enlightened, self-conscious populace which might collectively elaborate a vision of socialism in freedom. In such an intellectual context, Habermas's ideal speech situation, the centerpiece of his system, offers a significant theoretical and practical opportunity for renewing a stagnating and inauthentic society.
The themes of militarism, tolerance for social and cultural diversity, and the insidious effects of political bureaucracy introduce us, in this group of summaries, to the critical role played by government and politics in determining whether mankind enjoys peace or suffers war. The remaining summaries represent case studies of the relationship between government and a foreign policy of expansionism and imperialism. Earlier treatments of similar issues of foreign policy and militarism are to be found in the October/December, 1979 and the Summer 1980 issues of Literature of Liberty.
“Militarism, its Dimensions and Corollaries: An Attempt at Conceptual Clarification.” Journal of Peace Research 16, No. 3(1979).
The term “militarism” is commonly used both for analytical and propaganda purposes to label and condemn widely differing phenomena. In the liberal tradition of the West, most writers emphasize the notion of excess when discussing militarism, while Marxists link the term directly to imperialism and monopoly capitalism. Borrowing elements from both traditions, the author suggests that a discussion of the meaning of militarism can be organized along three dimensions: (1) the behavioral, (2) the attitudinal or ideological, and (3) the structural.
From one point of view, behavior is the most crucial dimension of militarism. Militarist social structure and ideology are worrisome mainly because they incite violent behavior. The behavioral dimension grows more complicated when one introduces the distinction between latent and actual use of violence. Brinksmanship and the policy of deterrence involve the skillful “nonuse” of military force. Obviously, both actual and latent use of violence may reach the point of excess. While no consensus has been reached as to the dividing line between appropriate and excessive violent behavior, we can identify a number of points along a scale from Gandhian pacifism to genocide which provide a more convenient framework for discussion.
On the attitudinal level, the author stresses that, unlike natural catastrophes, wars result from human decision. Debate has long raged among researchers concerning which factors lead to decisions to resort to organized violence. The school of Konrad Lorenz relates aggression to a psychologically programmed fighting instinct, while Marxists maintain that violent attitudes and ideology derive from the aggression implicit in certain economic relations. Undoubtedly, individual values play a large role in disposing persons and groups toward violent behavior. Studies have detected a high correlation between militarist attitudes and high scores on scales such as pessimism concerning human nature, extraversion (i.e., dependence on the social environment for opinions and motivations), misanthropy, social irresponsibility, lack of empathy, and egoism.
Turning to the structural dimension, we observe that nation-states with their near monopoly on the legitimate use of force differ widely in the organization of civilian-military relationships (level of military spending, consumption of natural resources, diversion of talent to military uses, control of the military by civilian authorities, etc.). At the heart of the military-industrial complex in industrialized countries lie not only the common interests of business and military leaders, but also the political interests of politicians whose electorate depends on high military procurement. In undeveloped nations, a strong military is often considered essential to the maintenance of order and the bolstering of national pride and economic development. Powerful nations have nurtured these seeds of militarism in poorer countries in order to increase their own international influence. Transfer of arms and military technology, as well as the training of local personnel, have thus become tools of the strong for the subjection of the weak.
As a partial antidote to the militarist syndrome, Prof. Skjelsbaek recommends a vigorous educational campaign on the part of anti-militarist groups around the world. Before such an effort can be undertaken, however, a foundation of tolerance and cooperation must be established between groups which see a legitimate, though limited role for the military and those which reject all forms of organized violence regardless of circumstances.
“The Plural Society and the Western Political Tradition.” Canadian Journal of Political Science 12(December 1979): 675–688.
Population mobility in the Western world and a growing sense of ethnic identity raise the question of whether societies can successfully accommodate populations of diverse cultural traditions. What are the attitudes toward multicultural societies as reflected in Western political philosophy from Ancient Greece to the present day? The overview reveals a general lack of sympathy for cultural plurality throughout the Western tradition.
At the very beginning of that tradition, Aristotle was “locked into the world of the polis” and developed an ethnocentric view which opposed Greek and barbarian and allowed little room for creative interaction between the two. Such a perspective was consistent with the narrow political context in which the philosopher found himself. The conquests of Alexander, however, abruptly widened the horizons of the Greeks as they assumed control over radically different peoples and cultures.
From a political viewpoint, the Stoic school with its bold assertion of the unity of mankind represents the most important result of this Hellenistic expansion of cultural perspective. However, Stoic doctrine tended to discount or disregard cultural differences in order to emphasize mankind's common heritage under the rule of divine reason. As a result, the ancient world's most durable experiment in cultural coexistence, Imperial Rome, owed less to Stoic universalism than to a “long pragmatic accumulation of cultural contacts.”
During the Middle Ages, the vigorous cultural diversity of Europe evolved more because of the collapse of government machinery than from any firm commitment to pluralism. In addition, the Church's insistence on doctrinal orthodoxy undoubtedly gave impetus to attempts at imposing social uniformity. The near-destruction of Provencal culture during the Albigensian Crusade is a case in point.
While Reformation churches also required uniformity of belief, the plurality of religious doctrine in areas such as Switzerland encouraged a new tolerance for divergent opinions. Over a period of centuries, the spirit of toleration spread to most parts of the West. Thus, the Reformation Fathers unwittingly became the most potent catalysts for social pluralism in the Western tradition.
Nevertheless, countervailing forces were to be found in the secular world. The rise of the centralizing and homogenizing nation-state drew intellectual support from such thinkers as Bodin and Hobbes. Assuming a “similitude of passions” within the human race, Hobbes established a science of politics which aimed at suppressing every major source of human variation, including individual temperament.
While the idea of community resurfaced in the eighteenth century among such divergent thinkers as Rousseau and Edmund Burke, the burgeoning nationalist movement quickly adopted it to help eliminate the pluralist political regimes of Central and Eastern Europe. The lone voice among liberals to question nationalist monoculturalism was that of Lord Acton. Acton saw the multicultural state as the best safeguard against the rise of despotism. In his view, its diverse elements would also assure the creativity and regeneration of the commonwealth.
Although the Western tradition has largely discouraged pluralism, the West may possess “untapped resources” in this area, as well as some capacity to adapt. As a systematic effort at analysis and synthesis, McRae suggests a closer scrutiny of Western elements favorable to cultural diversity, a comparison of non-Western pluralist traditions, selective borrowing of material from psychology and sociology, and a systematic study of pragmatic responses to cultural pluralism. Without such a general examination of the question, he asserts, our discussions of the individual and society or of man and the state will prove empty, since we will have failed to take full account of the variety of mankind.
“On Some Contradictions of Socialist Society: The Case of Poland.” Soviet Studies 31(April 1979): 167–187.
As a Marxist living in Eastern Europe, Prof. Staniszkis has had a lifetime in which to observe the institutionalized inanities of life in a socialized society. In an unusually frank essay written in Warsaw for the University of Glasgow, this Polish intellectual presents a minute analysis of the self-defeating practices which characterize the often halting operations of her country's social, political, and economic system.
At the root of Poland's contradictory way of life, Prof. Staniszkis finds a bureaucracy which lacks legitimacy in the Weberian sense of the term. Instead of following disinterested rules and procedures by which it might acquire the sanction of the population, Polish bureaucracy operates largely through charisma, constantly violating announced policy to take account of “exceptional cases.”
At the top of the administrative pyramid, the ruling group has arrogated to itself a secular infallibility, so that each new program is presented as “objectively true” and linked with “objective social laws.” Armed with this dogmatism, the ruling group has eliminated all self-regulatory mechanisms for testing its policies (such as the free market or other feedback loops). In cases where mistakes occur, they are often not corrected, since admission of error would compromise the omniscience of the bureaucracy and of the Communist party. Thus, official emphasis on the development of the production goods sector stubbornly persists even when a large percentage of Polish industrial equipment lies idle. The ever-expanding structure of enforced miscalculations has become an “artificial reality” to which everyone must pay lip service.
As a result of a system based partly on illusion, crisis has become a recurrent motif of life in Poland. Economic or political upheavals induce the intransigent leadership and bureaucracy to “make reforms.” However, these are usually palliatives (such as clemency for protest leaders), which allow “artificial reality” to regain its absurd sway over the country.
Since the Polish state requires tangible signs of allegiance to a manifestly irrational system, an all-pervading hypocrisy saps the spiritual vitality of the nation. Among the middle class in this “classless” society, superior language abilities have spawned a curious game. A manager or scientist may mouth the official, ritualized rhetoric, but with a certain debonair irony which tells those in-the-know that he does not take these clichés seriously. This practice fosters the illusion of preserving one's personal integrity, but at the cost of depriving political and individual expression of the force of sincerity. The strangely inarticulate quality of protest in Poland reveals the impotence of irony as a tool for achieving political reform.
Workers also recognize the emptiness of the official political vocabulary, but they lack the linguistic skill for the subtleties of irony. As a result, they often vent their frustrations by gratuitous acts of violence against the symbols and representatives of power. Despising the only political language they know, striking workers present mostly economic demands, which skirt the real problems of the working class.
Professor Staniszkis concludes that the clumsily maintained status quo “leads not only to waste of the human and economic potential of the system, but also to deep inner corruption and the corrosion of...ideology and the regime's legitimacy.” “My prognosis for the future,” she writes, “is not very optimistic.”
“U.S. Foreign Policy: The Revival of Interventionism.” Monthly Review 31(February 1980):15–27.
The “human rights” emphasis in President Carter's foreign policy has been a transitional policy aimed at overcoming domestic and foreign opposition to U.S. intervention. The loss of Vietnam and the Indochina war demoralized public opinion and created distrust of the capabilities of America's foreign policymakers.
Carter's administration has projected a new set of “values” for the role of the U.S., new means to the same ends as the Nixon/Ford administration: to develop an international network of loyal and supportive allies, to maintain a worldwide armed force with a capacity to defend economic interests, and to secure the continued acquiescence of subordinate nations.
The human rights emphasis has been highly successful as a public relations policy: it has defused internal unrest by distancing the U.S. from its image as a source of destruction. For example, the U.S. has been transformed from the accused, attacked for its crimes in Vietnam, into the accuser of Vietnam for its cruel treatment of the “boat people.” She is now viewed as the virtuous humanitarian welcoming the fleeing refugees with open arms.
Thus the image of the United States as a moral leader has been rehabilitated. The renewed pursuit of “morality” has been put to service in reconstituting the U.S.’s interventionist capacity. Former critics in the Third World have been won over: similarly, improved relations with allies in Europe were strongly influenced by Carter's human rights emphasis.
As social upheavals continuously disturbed various countries, causing waves of unrest in neighboring states, the right wing's response has been a clamor for action. Carter has been hesitant to undermine the fragile internal consensus and has avoided any kind of large-scale warfare. However, there has been a gradual shift toward selective U.S. intervention and military aid. Specifically, since late 1978 human rights groups have been losing influence in Washington, D.C. The military budget has increased, arms sales have increased, and new military aid programs have been implemented and expanded in the Mid-East, in Central America, and in Southeast Asia.
The new aggressive posture of the U.S. toward Third World concerns has also served to distract attention from internal problems. In a recent presidential address, President Carter blamed OPEC for America's unemployment, inflation, declining standard of living, and the energy crisis. Cleverly and subtly, Carter aroused a sense of U.S. nationalism by polarizing “us” vs. “them,” and gave new life to a growing hostility to the injustices visited upon “innocent” U.S. citizens. “Better to intervene against the greedy Arab oil millionaires than to wait in gas lines on Main Street.”
In short, U.S. foreign policy has come full circle since 1976, and the 80s promise to be a period of growing confrontation. As the Third World changes from being a passive “human rights” victim to being an active protagonist in social revolutions, the U.S. is shifting from being a critic of repression to being a promoter of economic and military intervention.
“Lessons of the Mexican War.” Pacific Historical Review 47(August 1978): 325–342.
Historians have scarcely recognized the dilemma of the Mexican War that faced President James Polk's administration in the year preceding the Treaty of Guadalupe Hidalgo in 1848. The President's policies achieved the goals Polk had in mind at the outset of the war: acquiring the territories of New Mexico and California and extricating U.S. forces from Mexico. However, the success of the treaty rested largely on the good fortune that Polk's agent, Nicholas Trist, disobeyed presidential orders to leave Mexico and negotiated the treaty with Mexican officials.
Polk was facing a divided United States that was becoming increasingly frustrated with the prolonged continuation of the war. The reasons for the public's support of United States involvement in the war were diverse, ranging from a desire to expand the vast territory of North America to fulfilling the supposed “White Man's Destiny” by socially and politically regenerating the “inferior Mexican race.” American anti-imperialists, however, questioned Polk's integrity in allowing the war to continue without substantial victories, especially since the basic motivation of the war was imperialistic.
Despite the criticism against Polk, the general consensus was that there was no alternative except to escalate the war in order to hasten and ensure its end. The government of Mexico was in such shambles that it surrendered within six months.
In the Mexican War, military means proved more potent than diplomatic ones in compelling a backward nation that had neither political traditions nor material achievements to defend. But the war also proved that the transition from war to peace is far more achievable and permanent if the terms of peace are limited, tangible, and realistic.
“Spain: The Spanish Problem and the Imperial Myth.” Journal of Contemporary History 15(January 1980):5–25.
The year 1898 became known as the ‘Year of Disaster’ for Spain, as the Treaty of Paris brought to a head the ‘Spanish problem.’ At the signing of the Treaty, Spain surrendered the last shreds of a once vast overseas empire. Her navy lay in the bed of the Pacific or the Caribbean, sunk in two brief battles with almost contemptuous ease by the American fleet. The demolition of the Spanish illusion of grandeur threw a large part of the politically aware population into a state of shock and despair. The agonized reaction of so many Spaniards to the Disaster of 1898 contrasted sharply with the relative equanimity displayed towards the loss of the world's most extensive overseas empire, stretching from Cape Horn to the borders of present-day Canada, during the 1820s.
As Spain entered the nineteenth century, she was secure in her role as an imperial power and just glimpsing the potential of industrialization. Here, says Blinkhorn, there is room to doubt whether Spain could ever have become the industrialized world power she perceived herself to be. Easy colonial wealth contributed to the Spanish bourgeoisie's persistent lack of economic enterprise and its ready acceptance of rural-aristocratic values. In summary, the Spanish rulers maintained a reactionary outlook in government, clinging to illusions of days gone by despite rapidly changing world affairs outside Spain. The full implications of the territorial losses of the 1820s were never quite grasped in Spain. Had the entire overseas empire been relinquished by 1830, it is at least conceivable that Spain's economic and social development might have been healthier, and that, given a decade or two, Spain might have adjusted more comfortably to the second-class status which was finally and brutally forced upon her in 1898.
As Spain gradually came to accept that the independence of the Spanish-American mainland was an accomplished fact, her rulers' determination to cling fast to the Antilles and the Philippines only increased until it became a virtual obsession. From the 1860s the view prevailed in political circles that Spain's “greatness,” self-respect, prosperity, and internal peace depended in large part upon keeping her surviving colonies, and above all, Cuba.
This notion of Spain seeking to return to her former stature, her governing elite clinging to a fantasy of national greatness inspired by her imperial past and largely dependent upon her continuation as a colonial power, is central to any understanding of the malaise produced by the Disaster of 1898. The Disaster did not precipitate the downfall of the monarchy, but it did inject new life into Republicanism, which eventually came to power in 1931.
“United States Indian Policy and the Debate over Philippine Annexation: Implications for the Origins of American Imperialism.” The Journal of American History 66(March 1980):810–836.
The author argues against the conventional view that American imperialism began in 1898 (annexation of the Philippines and the Spanish-American war), on the grounds that our policy towards annexing the Philippines was set by our treatment of the American Indians. Imperialists themselves made this argument, and Williams suggests historians would do well to take this view seriously.
Williams claims that the Indians were colonized. They were culturally different people who, because of their dissimilarities with most Americans, were not incorporated into the political process but were enveloped by the United States without being given citizenship rights. Such colonization began with a change in Indian status from being regarded as a sovereign nation (as defined by treaty), to that of “domestic dependent nations” or “wards” (as defined by Chief Justice Marshall). By 1871, Congress stopped making treaties with the Indians, and the Supreme Court ruled that Congress could override an old treaty simply by statute. By 1885, the Court held that the Indians were only “local dependent communities” and that those born on the reservations were not granted citizenship rights as defined by the 14th Amendment. By the end of the century, the federal government had virtually unlimited power over the Indians. They were powerless subjects with no rights or treaty guarantees that the government had to respect.
This colonial status of the Indians was used by the imperialists as a model for the alien subject people overseas (in the Philippines). Imperialism abroad was compared to expansionism at home, and since the imperialists favored the former, the stage was set for the argument that annexing the Philippines involved continuity with the past. Since expansionism in the United States meant progress, which in turn meant conquering the Indians, it followed that incorporation of non-contiguous people as alien subjects was not a dramatic change.
The imperialist argument goes as follows: (1) Alaska was noncontiguous, and neither that territory nor the Indian or New Mexican territories would be states unless Anglo Saxons had populated them. (2) The distance of the Philippines created no problem for government control. When California was annexed in 1848, it was less accessible than the Philippines were in the early twentieth century (given modern technology). Furthermore, the western territories were “colonies” and no different from overseas territories; that is, in both cases Congress had supreme and total power to do with them what they pleased.
The anti-imperialists argued against annexation by insisting that the American form of government involved consent by the governed—hardly the case with the Philippine Islands. The imperialists countered by citing Indian colonization as a precedent. Most anti-imperialists accepted nonconsensual expansion in Indian lands. As Henry Cabot Lodge, a leading imperialist, commented, if the anti-imperialists are right, then “our whole past record of expansionism is a crime.” Since most anti-imperialists did not view expansionism as a crime, they either denied the Indian analogy or ignored the issue entirely.
Another comparison between the treatment of the Indians and the Filipinos can be seen both in the voting records on Indian policy and foreign imperialism and in the rhetoric of the Congressional debates. In the former case, there is a strong correlation between Congressmen's votes on Indian policy and on foreign annexation. In the latter case, the rhetoric of civilization versus barbarism was quite strong, as was the belief that both Indians and Filipinos were barbarians. Imperialist rhetoric also affected the 1899–1902 war, which was viewed by most soldiers and officers as an Indian war. Most soldiers and officers who fought in the war had fought in the Indian wars in the West. Because of this, the U.S. army was probably more prepared to fight a guerrilla war at that time than at any time subsequently.
Given all the above information, Williams concludes that the events of 1898–1902 did not involve a marked departure in the foreign affairs behavior of the American government.
“Filipino Resistance to American Occupation: Batangas, 1899–1902.” Pacific Historical Review 48(November 1979):531–556.
The author discusses the Filipino reaction to the Philippine-American war of 1899–1902, specifically in the province of Batangas. The Batanguenos had just become free from alien rule by defeating the Spanish in mid-1898 when war broke out between America and the forces of Aguinaldo (president of the Philippine Republic). The Batanguenos were led by Miguel Malvar, a wealthy landowner, businessman, and government official. Most of Malvar's support came from the elite, who had been educated abroad in Spanish schools and thus had imbibed European liberal nationalist Lockean/Rousseauian ideas. The masses, most of whom were drafted into Malvar's army, were less enthusiastic and gave less zealous support to the cause.
For the first year of the war, fighting took place in provinces outside of Batangas. However, in January of 1900, the Americans invaded and conquered the municipalities, and Malvar's forces headed for the mountains and the outlying areas. From then on, Malvar's forces engaged in guerrilla warfare against overwhelming odds.
To be successful, Malvar needed noncombatant support. In the first year, covert resistance to American rule was widespread, particularly among the elite, and instances of collaboration with the Americans were rare. Most civilians, however, just tried to survive as best they could. By 1901, however, the enthusiasm of Malvar's forces began to wane, collaboration with the enemy increased, and President Aguinaldo was captured. Still, Malvar kept the resistance alive (partly by threatening the people in the areas he controlled).
However, the American army ultimately prevailed. Though the invaders originally employed nonmilitary methods (e.g., setting up schools and municipal governments), the policy changed to one of severe military tactics as it became clear that nonmilitary methods weren't working. Americans found it difficult to distinguish noncombatants from guerrillas and came to despise all Filipinos, civilians and guerrillas alike. American forces tortured suspected supporters of Malvar and burned the barrios from which attacks on the Americans emanated. Later in the war, when Brigadier General Bell (a veteran of the Indian wars) took over, American methods became even harsher. Bell established his “concentration” policy, designed to insulate the guerrillas from the noncombatants and deprive the former of food. In each town in Batangas, Americans established “zones” in which residents were guaranteed “protection” against attacks by Malvar's forces. Outside of these zones, property was destroyed or confiscated, travel was forbidden except by those with special passes, and those who refused to enter the zones were tortured. In addition, Bell ordered the arrest of all known supporters of Malvar and punished all those who refused to cooperate. Bell's policies led to widespread suffering: food was scarce in the zones, thousands died of disease or starvation, and many areas outside the zones were reduced to rubble, but the methods helped to win the war for America.
Though scattered opposition to American rule continued through 1910, most Batanguenos accommodated themselves to the Americans. This was due, says the author, to these factors: first, they had little choice since ousting the Americans by force seemed hopeless; second, American schools provided more opportunities for some Filipinos; third, Americans provided benefits such as sanitation, roads, food relief, etc.; fourth, by permitting suffrage only to Filipinos with wealth, education, and previous governmental experience, the Americans kept the elite dominant in the new Philippines.
The author concludes by noting that the Batanguenos' response differed from that discussed by Teodoro Agoncillo in his study of resistance in Manila and Cavite. Agoncillo saw the elite collaborating and the masses resisting, while in Batangas such a rigid opposition did not exist and the elites tended to be more hostile to American rule. Nor does the Batanguenos' resistance fit the pattern in Pampanga, as described by John Larkin. There the populace tried to maintain good relations with both the Americans and the guerrillas, and accepted American rule after Aguinaldo was captured. In Batangas, the resistance was stronger and lasted until the very end. The Batanguenos' opposition was stronger because they had been preparing for the invasion up until 1900, Malvar was a competent military leader, the people shared the ethnic composition of the republic's leaders, and the elite's contribution helped promote tenacious resistance.
“World War I: European Origins and American Intervention.” The Virginia Quarterly Review 56(Winter 1980):1–18.
World War I has often been compared to “Armageddon,” the nation-shattering miracle preceding the Last Judgment in the book of Revelation. In its suddenness and magnitude, the conflict marked the first time in history that the destructive deeds of man matched the disasters of nature.
Though the actual outbreak of complete war was sudden, tensions on the European continent had been brewing for years. The war sprang from two related breakdowns in mankind's proudest creation at the beginning of the twentieth century, the highly civilized nation-states of Europe. These countries were experiencing severe problems in their relations with one another. All the main European powers held grudges against each other over the control of territories and populations, imperial colonies in Africa and Asia, and assertions of political and economic influence.
Germany bore the heaviest responsibility for encouraging tensions among nations by fomenting discord among rivals through imperialist crises in the Far East, Africa, and the Balkans. German actions reflected a reckless desire for expansionism. German leaders felt their destiny as a “world state with a world mission” would only be fulfilled by a “coming world war.” However, other nations were quick to react to Germany's aggression. These European countries were experiencing domestic strife that made war a welcome opportunity to lay aside internal troubles.
Britain was facing the growing militancy of the Labour Party, the rising voice of women's suffrage, and an incipient civil war over Irish autonomy. In France, the conflict between Socialists and Nationalists was coming to a head; in Germany, the Social Democrats were emerging as the strongest single party, and war seemed the only means to curb them.
Only Russia was gaining internal stability, due to her massive industrialization and sweeping land reforms. World War I did not cause these internal breakdowns, though it did accelerate them. The United States viewed the European war with detachment, being geographically and morally removed from the conflicts. The sinking of the Lusitania raised the question of intervention for the first time, but Wilson maintained his policy of remaining neutral and seeking peace.
However, the Germans seemed unable to conceive of any other course except riding the war to total victory, and ignored U.S. demands to cease submarine warfare.
Wilson had two concerns regarding the escalating war: to end the conflict and to prevent any recurrence of such a war. He finally chose to intervene since the financial collapse of the European Allies was imminent. Belligerency seemed to Wilson the only means to a final and lasting peace among nations.
American intervention revolutionized World War I by saving the Allies from financial ruin, lending a moral boost to hold the Western front, and supplying fresh manpower for the final counteroffensive that ended the war on November 11, 1918. The European war became a global conflict by drawing in the Western Hemisphere and extending its reach into the Pacific.
The far-reaching effects of World War I were to shift the focus away from the territorial appetites and imperial designs of nations and toward new ways of conducting relations among them through the attempt to create an international order.
Wilson's decision to intervene set the tone for the U.S. world role in the twentieth century. Despite Wilson's intentions, the war turned into a self-righteous crusade which confirmed a dangerous predilection in the U.S. conduct of world affairs. “Thanks to him [Wilson] and to the long-running aftereffects of World War I, the United States has tried again and again to shape events that have seemed to others beyond human control. That has been America's glory and tragedy.”
a book series examining the complex phenomena of human behavior in relation to the dynamic market process.
Principles of Economics
by Carl Menger
The Foundations of Modern Austrian Economics
Edited by Edwin G. Dolan
Epistemological Problems of Economics
by Ludwig von Mises
L.S.E. Essays on Cost
by J. M. Buchanan and G. F. Thirlby
The Ultimate Foundation of Economic Science
by Ludwig von Mises
Economics as a Coordination Problem
by Gerald P. O'Driscoll, Jr.
America's Great Depression
by Murray N. Rothbard
Power and Market: Government and the Economy
by Murray N. Rothbard
Capital, Interest, and Rent
by Frank A. Fetter
The Economic Point of View
by Israel M. Kirzner
Capital and its Structure
by Ludwig M. Lachmann
Capital, Expectations, and the Market Process
by Ludwig M. Lachmann
by Ludwig von Mises
The Economics of Ludwig von Mises
Edited with an introduction by Laurence S. Moss
Man, Economy, and State
by Murray N. Rothbard
New Directions in Austrian Economics
Edited by Louis M. Spadaro
Individual copies available:
Cloth $15.00 each; Paper $5.00
Set: Cloth $210.00; Paper $70.00
Institute for Humane Studies
P.O. Box 2250
Wichita, Kansas 67201