Forms of Dishonesty: Lying, deliberate deception, withholding information, and failure to seek out the truth.

Why is dishonesty wrong? With regard to professionals, a requirement never to conceal the truth would mean that engineers, physicians, lawyers, and other professionals could not exercise confidentiality or protect proprietary information. Doctors could never misrepresent the truth to their patients, even when there is strong evidence that this is what the patients prefer and that the truth could be devastating. Engineers have some degree of responsibility to ensure that employers, clients, and the general public make autonomous decisions, but their responsibilities are more limited than those of physicians. Their responsibilities probably extend only to the third of the three conditions of autonomy: ensuring that employers, clients, and the general public make decisions regarding technology with understanding, particularly understanding of the consequences of those decisions.

Now let us turn to the utilitarian perspective on honesty. Utilitarianism requires that our actions promote human happiness and well-being. The profession of engineering contributes to this utilitarian goal by providing designs for buildings, bridges, chemicals, electronic devices, automobiles, and many other things on which our society depends. It also provides information about technology that is important in decision making at the individual, corporate, and public policy levels. Dishonesty in engineering research can undermine these functions. If engineers report data falsely or omit crucial data, then other researchers cannot depend on their results. This can undermine the relations of trust on which a scientific community is founded. Just as a designer who is untruthful about the strength of materials she specifies for a building threatens the collapse of the building, a researcher who falsifies the data reported in a professional journal threatens the collapse of the infrastructure of engineering. Dishonesty can also undermine informed decision making. Managers in both business and government, as well as legislators, depend on the knowledge and judgments provided by engineers to make decisions. If these are unreliable, then the ability of those who depend on engineers to make good decisions regarding technology is undermined. To the extent that this happens, engineers have failed in their obligation to promote the public welfare.

Dishonesty in Engineering Research and Testing: Dishonesty in science and engineering takes several forms: falsification of data, fabrication of data, and plagiarism. Falsification involves distorting data by smoothing out irregularities or presenting only those data that fit one's favored theory and discarding the rest. Fabrication involves inventing data and even reporting results of experiments that were never conducted. Plagiarism is the use of the intellectual property of others without proper permission or credit. It takes many different forms, but it is really a type of theft.

Confidentiality: One can misuse the truth not only by lying or otherwise distorting or withholding it but also by disclosing it in inappropriate circumstances. Engineers in private practice might be tempted to disclose confidential information without the consent of the client. Information may be confidential if it is either given to the engineer by the client or discovered by the engineer in the process of work done for the client.
Given that most engineers are employees, a more common problem involving the improper use of information is the misuse of a former employer's proprietary information. Using designs and other proprietary information of a former employer can be dishonest and may even result in litigation. Even using ideas one developed while working for a former employer can be questionable, particularly if those ideas involve trade secrets, patents, or licensing arrangements. An engineer can abuse client–professional confidentiality in two ways. First, she may break confidentiality when it is not warranted. Second, she may refuse to break confidentiality when a higher obligation to the public requires it.

Intellectual Property: Intellectual property is property that results from mental labor. It can be protected in several ways, including as trade secrets, patents, trademarks, and copyrights.

Expert Witnessing: Engineers are sometimes hired as expert witnesses in cases that involve accidents, defective products, structural defects, and patent infringements, as well as in other areas where competent technical knowledge is required. Calling upon an expert witness is one of the most important moves a lawyer can make in such cases, and engineers are usually well compensated for their testimony. However, being an expert witness is time-consuming and often stressful. When confronted with these demands, the expert witness faces certain ethical pitfalls. The most obvious is perjury on the witness stand. A more likely temptation is to withhold information that would be unfavorable to the client's case. In addition to being ethically questionable, such withholding can be an embarrassment to the engineer because cross-examination often exposes it. To avoid problems of this sort, an expert witness should follow several rules. First, she should not take a case if she does not have adequate time for a thorough investigation. Second, she should not accept a case if she cannot do so in good conscience. Third, she should consult extensively with the lawyer so that the lawyer is as familiar as possible with the technical details of the case and can prepare the expert witness for cross-examination. Fourth, she should maintain an objective and unbiased demeanor on the witness stand. Fifth, she should always be open to new information, even during the course of the trial.
Informing the Public: Some types of professional irresponsibility in handling technical information may be best described as a failure to inform those whose decisions are impaired by the absence of that information. From the standpoint of the ethics of respect for persons, this is a serious impairment of moral agency. The failure of engineers to ensure that technical information is available to those who need it is especially wrong where disasters could be avoided.

Conflict of Interest: A conflict of interest can strike at the heart of professionalism. This is because professionals are paid for their expertise and unbiased professional judgment in pursuing their professional duties, and conflicts of interest threaten to undermine the trust that clients, employers, and the public place in that expertise and judgment. When a conflict of interest is present, there is an inherent tension between actively pursuing certain interests and carrying out one's professional duties as one should. In considering these prohibitions and conflicts of interest more generally, however, several important points must be kept in mind. First, a conflict of interest is not just any set of conflicting interests. Second, simply having more commitments than one can satisfy in a given period of time is not a conflict of interest. Third, the interests of the client, employer, or public that the engineer must protect are restricted to those that are morally legitimate. Fourth, a distinction is sometimes made between actual and potential conflicts of interest. Fifth, even though it is best to avoid conflicts of interest, sometimes this cannot reasonably be done. Even then, the professional should reveal the existence of the conflict rather than wait for the client or the public to find out about it on their own.

The relationship of safety to risk is an inverse one: the more risk we accept in an engineering project, the less safe it becomes, and a project with absolutely no risk would be absolutely safe. So safety and risk are intimately connected. Concern for safety pervades engineering practice. One of the most common concepts in engineering practice is the notion of "factors of safety." Virtually all engineering codes give a prominent place to safety, stating that engineers must hold paramount the safety, health, and welfare of the public.
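To make the idea of a factor of safety concrete, here is a minimal sketch in Python; the beam capacity and expected load figures are hypothetical illustrations, not values taken from any code or standard.

    # Minimal sketch of the "factor of safety" concept: the ratio of the load a
    # component can withstand to the load it is expected to carry in service.
    # The numbers below are hypothetical.

    def factor_of_safety(capacity: float, expected_load: float) -> float:
        if expected_load <= 0:
            raise ValueError("expected load must be positive")
        return capacity / expected_load

    # A beam expected to carry at most 40 kN but able to withstand 100 kN has a
    # factor of safety of 2.5, so the design tolerates loads well beyond those
    # anticipated in normal service.
    print(factor_of_safety(capacity=100.0, expected_load=40.0))  # 2.5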
Approach to Risk: Risk as the Product of the Probability and Magnitude of Harm. To assess a risk, an engineer must first identify it, and to identify a risk, an engineer must first know what a risk is. The usual engineering definition of risk is "a compound measure of the probability and magnitude of adverse effect." That is, risk is composed of two elements: the likelihood of an adverse effect or harm and the magnitude of that adverse effect or harm. By compound is meant the product: risk is the product of the likelihood and the magnitude of harm. A relatively slight harm that is highly likely might therefore constitute a greater risk than a relatively large harm that is far less likely. We can define a harm as an invasion or limitation of a person's freedom or well-being. Engineers have traditionally thought of harms in terms of things that can be relatively easily quantified, namely impairments of our physical and economic well-being.

Utilitarianism and Acceptable Risk: In order to determine whether a risk is morally acceptable, engineers and risk experts usually look to utilitarianism. This position holds, it will be remembered, that the answer to any moral question is found by determining the course of action that maximizes well-being. One way of implementing this account of acceptable risk is by means of an adaptation of cost–benefit analysis. As we have seen, utilitarians sometimes find cost–benefit analysis to be a useful tool in assessing risk. In applying this method to risk, the technique is often called risk–benefit analysis because the "cost" is measured in terms of the risk of deaths, injuries, or other harms associated with a given course of action. For simplicity, however, we shall continue to use the term cost–benefit analysis. This approach has several limitations. First, it may not be possible to anticipate all of the effects associated with each option; insofar as this cannot be done, the cost–benefit method will yield an unreliable result. Second, it is not always easy to translate all of the risks and benefits into monetary terms. Third, cost–benefit analysis in its usual applications makes no allowance for the distribution of costs and benefits. Fourth, cost–benefit analysis gives no place to informed consent to the risks imposed by technology.
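Both the definition of risk as the product of likelihood and magnitude of harm and the basic logic of a risk–benefit comparison can be shown in a small sketch; the design options, probabilities, and dollar figures below are hypothetical, and expressing harms in dollars is precisely the monetization step that the second and third limitations above call into question.

    # Risk as the compound (product) of probability and magnitude of harm, plus a
    # bare-bones risk-benefit comparison. All options and figures are hypothetical.

    def risk(probability: float, magnitude: float) -> float:
        """Expected harm: likelihood of the adverse effect times its magnitude."""
        return probability * magnitude

    # A slight harm that is highly likely can be a greater risk than a large harm
    # that is far less likely.
    print(risk(0.2, 10) > risk(0.001, 1000))  # True (2.0 > 1.0 expected harm units)

    # Toy risk-benefit comparison of two design options, with benefit and expected
    # harm both expressed in dollars.
    options = {
        "design_a": {"benefit": 1_000_000, "expected_harm": risk(0.01, 20_000_000)},
        "design_b": {"benefit": 1_200_000, "expected_harm": risk(0.05, 10_000_000)},
    }
    for name, option in options.items():
        print(name, "net benefit =", option["benefit"] - option["expected_harm"])
    # design_a nets 800,000; design_b nets 700,000 despite its larger gross benefit.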
Expanding the Engineering Account of Risk: The Capabilities Approach to Identifying Harm and Benefit. There are four main limitations to this rather narrow way of identifying harm. First, often only the immediately apparent or focal consequences of a hazard are included, such as the number of fatalities or the number of homes without electricity; yet hazards can also have auxiliary consequences, broader and more indirect harms to society. Second, both natural and engineering hazards might create opportunities, which should be accounted for in the aftermath of a disaster; focusing solely on the negative impacts and ignoring these benefits may lead to overestimating the negative societal consequences of a hazard. Third, there remains a need for an accurate, uniform, and consistent metric to quantify the consequences (harms or benefits) of a hazard. Fourth, current techniques do not demonstrate the connection between specific harms or losses, such as the loss of one's home, and the diminishment of individual or societal well-being and quality of life. Yet it is surely the larger question of the effect on quality of life that is ultimately at issue when considering risk.

The capabilities approach addresses these limitations. Specific capabilities are defined in terms of functionings, or what an individual can do or become in his or her life that is of value. Examples of functionings are being alive, being healthy, and being sheltered. A capability is the real freedom of an individual to achieve a functioning; it refers to the real options he or she has available. Capabilities are constituent elements of individual well-being. Capabilities are distinct from utilities, which refer to the mental satisfaction, pleasure, or happiness of a particular individual. Often, people's preferences or choices are used to measure satisfaction, and utilities are assigned to represent a preference function. From the capabilities standpoint, a risk is the probability that individuals' capabilities might be reduced as a result of some hazard. In determining a risk, the first step is to identify the important capabilities that might be damaged by a disaster, along with indicators that measure them. Next, the indicators must be scaled onto a common metric so that the normalized values of the indicators can be compared. Then, a summary index is constructed by combining the information provided by each normalized indicator, creating a hazard index (HI). Finally, in order to put the HI into the relevant context, its value is divided by the population affected by the hazard, creating the hazard impact index, which measures the hazard impact per person.
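The construction of the hazard index can be sketched as follows; the capability indicators, their values, the min–max normalization, and the equal-weight averaging used to combine them are all illustrative assumptions, since the text specifies only the general procedure (normalize, combine, divide by the affected population).

    # Illustrative sketch of the hazard index (HI) and hazard impact index.
    # Indicators, values, min-max scaling, and equal weighting are assumptions.

    def normalize(value: float, best: float, worst: float) -> float:
        """Scale an indicator onto [0, 1], where 1 means the greatest capability loss."""
        return (value - best) / (worst - best)

    # Hypothetical indicators of reduced capabilities after a disaster.
    indicators = [
        normalize(120, best=0, worst=1000),    # fatalities
        normalize(4000, best=0, worst=20000),  # households left unsheltered
        normalize(14, best=0, worst=60),       # days without electricity
    ]

    # Combine the normalized indicators into a summary hazard index (HI).
    hazard_index = sum(indicators) / len(indicators)

    # Divide by the affected population to obtain the hazard impact per person.
    population_affected = 50_000
    hazard_impact_index = hazard_index / population_affected

    print(f"HI = {hazard_index:.3f}, impact per person = {hazard_impact_index:.2e}")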
The Public's Approach to Risk: Expert and Layperson. Differences in Factual Beliefs: The first difference is that engineers and risk experts believe that the public is sometimes mistaken in estimating the probability of death and injury from various activities or technologies. Risk expert Chauncey Starr has a similarly low opinion of the public's knowledge of probabilities of harm. He notes that people tend to overestimate the likelihood of low-probability causes of death and to underestimate the likelihood of high-probability causes of death. The latter tendency can lead to overconfident biasing, or anchoring. In anchoring, an original estimate of risk is made, an estimate that may be substantially erroneous. Even when the estimate is corrected, it is not sufficiently modified from the original estimate: the original estimate anchors all future estimates and precludes sufficient adjustment in the face of new evidence.

"Risky" Situations and Acceptable Risk: It does appear to be true that the engineer and risk expert, on the one hand, and the public, on the other, differ regarding the probabilities of certain events. The major difference, however, is in the conception of risk itself and in beliefs about acceptable risk. One of the differences here is that the public often combines the concepts of risk and acceptable risk, concepts that engineers and risk experts separate sharply. Furthermore, public discussion is probably more likely to use the adjective "risky" than the noun "risk."
In public discussion, the term "risky," rather than referring to the probability of certain events, more often than not functions as a warning sign, a signal that special care should be taken in a certain area. Two issues in the public's conception of risk and acceptable risk have special moral importance: free and informed consent, and equity or justice. These two concepts follow more closely from the ethics of respect for persons than from utilitarianism. According to this ethical perspective, as we have seen, it is wrong to deny the moral agency of individuals. Moral agents are beings capable of formulating and pursuing purposes of their own.

Free and Informed Consent: To give free and informed consent to the risks imposed by technology, three things are necessary. First, a person must not be coerced. Second, a person must have the relevant information. Third, a person must be rational and competent enough to evaluate the information. Determining whether meaningful and informed consent has been given is not always easy, for several reasons. First, it is difficult to know when consent is free. Second, people are often not adequately informed of dangers or do not evaluate them correctly. Third, it is often not possible to obtain meaningful informed consent from individuals who are subject to risks from technology.

Equity or Justice: The ethics of respect for persons places great emphasis on respecting the moral agency of individuals, regardless of the cost to the larger society. When the benefits and harms of a technology are inequitably distributed, an individual's rights to bodily integrity and life may be unjustly violated. An acceptable risk, then, is one in which (1) the risk is assumed by free and informed consent, or properly compensated, and (2) the risk is justly distributed, or properly compensated.

The Government Regulator's Approach to Risk: Regulators could decide to regulate only when there is a provable connection between a substance and some undesirable effect such as a risk of cancer. Because of the difficulties in establishing the levels of exposure to toxic substances at which there is no danger, this option would expose the public to unacceptable risks. On the other hand, regulators could eliminate any possible risk insofar as this is technologically possible. Choosing this option would result in the expenditure of large sums of money to eliminate minute amounts of any substance that might possibly pose risks to human beings. This would not be cost-effective; funds might better be spent elsewhere to eliminate much greater threats to public health.
Communicating Risk and Public Policy: Communicating Risk to the Public
Engineers are most likely to adopt the risk expert's approach to risk. They define risk as the product of the likelihood and magnitude of harm and are sympathetic to the utilitarian way of assessing acceptable risk. The professional codes require engineers to hold paramount the safety, health, and welfare of the public, so engineers have an obligation to minimize risk. However, in determining an acceptable level of risk for engineering works, they are likely to use, or at least be sympathetic with, the cost–benefit approach. The lay public comes to issues of risk from a very different direction. Although citizens sometimes have inaccurate views about the probabilities of harm from certain types of technological risks, their different approach cannot be dismissed as mere factual inaccuracy. Part of the difference results from the tendency to combine judgments of the likelihood and the acceptability of risk (the term "risky" seems to include both concepts). More important, the lay public considers free and informed consent and equitable distribution of risk (or appropriate compensation) to be important in determining acceptable risk. Finally, the government regulator, with her special obligation to protect the public from undue technological risks, is more concerned with preventing harm to the public than with avoiding claims of harm that turn out to be false. This bias contrasts to some extent with the agendas of both the engineer and the layperson. Although, as a government regulator, she may often use cost–benefit analysis as part of her method of determining acceptable risk, she has a special obligation to prevent harm to the public, and this may go beyond what cost–benefit considerations require. On the other hand, considerations of free and informed consent and equity, while important, may be balanced by cost–benefit considerations. What, then, is the professional obligation of engineers regarding risk? One answer is that engineers should continue to follow the risk expert's approach to risk and let public debate take care of the wider considerations.
In light of this, we propose the following guidelines for engineers in risk communication:

1. Engineers, in communicating risk to the public, should be aware that the public's approach to risk is not the same as that of the risk expert. In particular, "risky" cannot be identified with a measure of the probability of harm. Thus, engineers should not say "risk" when they mean "probability of harm"; the two terms should be used independently.

2. Engineers should be wary of saying, "There is no such thing as zero risk." The public often uses "zero risk" to indicate not that something involves no probability of harm but that it is a familiar risk that requires no further deliberation.

3. Engineers should be aware that the public does not always trust experts and that experts have sometimes been wrong in the past. Therefore, engineers, in presenting risks to the public, should be careful to acknowledge the possible limitations of their position. They should also be aware that laypeople may rely on their own values in deciding whether or not to base action on an expert's prediction of probable outcomes.

4. Engineers should be aware that government regulators have a special obligation to protect the public and that this obligation may require them to take into account considerations other than a strict cost–benefit approach. Public policy should take cost–benefit considerations into account, but it should also allow for the special obligations of government regulators.

5. Professional engineering organizations, such as the professional societies, have a special obligation to present information regarding technological risk. They must present information that is as objective as possible regarding probabilities of harm. They should also acknowledge that the public, in thinking about public policy regarding technological risk in controversial areas (e.g., nuclear power), may take into consideration factors other than the probabilities of harm.

A major theme in these guidelines is that engineers should adopt a critical attitude toward the assessment of risk. This means that they should be aware of the existence of perspectives other than their own. The critical attitude also implies that they should be aware of the limitations of their own abilities to assess the probabilities and magnitudes of harm.
Limitations in Detecting Failure Modes: One of the methods for assessing risk is the fault tree, which reasons backward from a hypothetical failure to the events that might cause it. A failure mode is a way in which a structure, mechanism, or process can malfunction. A related method, the event tree, reasons forward from a hypothetical initiating event to determine what consequences that event might have and the probabilities of those consequences.
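As a rough illustration of how such trees combine probabilities, the sketch below computes the likelihood of a top-level failure from hypothetical component failure probabilities, assuming the failures are independent; the components, numbers, and gate structure are invented for illustration, and the analysis can only cover the failure modes the analyst has thought to include.

    # Illustrative sketch of combining failure probabilities with AND/OR gates, as
    # in a simple fault tree. Components, probabilities, and structure are
    # hypothetical, and failures are assumed independent. Such an analysis covers
    # only the failure modes the analyst has anticipated.

    def and_gate(*probs: float) -> float:
        """Failure occurs only if every input fails (independent events)."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    def or_gate(*probs: float) -> float:
        """Failure occurs if any input fails (independent events)."""
        none_fail = 1.0
        for p in probs:
            none_fail *= (1.0 - p)
        return 1.0 - none_fail

    # Hypothetical system: the pumping subsystem fails only if both the primary
    # and backup pumps fail; the process fails if either the pumping subsystem or
    # the controller fails.
    pump_subsystem = and_gate(0.01, 0.05)
    process_failure = or_gate(pump_subsystem, 0.002)

    print(f"P(pumping subsystem fails) = {pump_subsystem:.4f}")   # 0.0005
    print(f"P(process fails)           = {process_failure:.4f}")  # ~0.0025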
Still another factor that increases risk and decreases our ability to anticipate harm is increasing the allowable deviations from proper standards of safety and acceptable risk. Sociologist Diane Vaughan refers to this phenomenon as the normalization of deviance. Every design carries with it certain predictions about how the designed object should perform in use. Sometimes these predictions are not fulfilled, producing what are commonly referred to as anomalies. Rather than correcting the design or the operating conditions that led to the anomalies, engineers or managers too often do something less desirable: they may simply accept the anomaly or even expand the boundaries of acceptable risk. Sometimes this process can lead to disaster.
The Engineer's Liability for Risk: Another issue that raises ethical and professional concerns for engineers is legal liability for risk. There are at least two issues here. One is that the standards of proof in tort law and in science are different, and this produces an interesting ethical conflict. Another is that in protecting the public from unnecessary risk, engineers may themselves incur legal liabilities.
The Standards of Tort Law: The standard of proof in tort law is the preponderance of the evidence, meaning that there is more and better evidence in favor of the plaintiff than the defendant. The plaintiff must show (1) that the defendant violated a legal duty imposed by the tort law, (2) that the plaintiff suffered injuries compensable in the tort law, (3) that the defendant's violation of legal duty caused the plaintiff's injuries, and (4) that the defendant's violation of legal duty was the proximate cause of the plaintiff's injuries. In many cases, scientific knowledge is simply not adequate to determine causal relationships, and this works to the disadvantage of plaintiffs. There are also problems with encouraging judges to take an activist role in evaluating scientific evidence in legal proceedings. The major ethical question, however, is whether we should be more concerned with protecting the rights of plaintiffs who may have been unjustly harmed or with promoting economic efficiency and protecting defendants against unjust charges of harm.
Protecting Engineers from Liability: The apparent ease with which proximate cause can be established in tort law may suggest that the courts should impose a more stringent standard of acceptable risk. But other aspects of the law afford the public less protection than it deserves. Engineers in private practice may face especially difficult considerations regarding liability and risk, and in some cases they may need increased protection from liability.
Becoming a Responsible Engineer Regarding Risk: To formulate a principle of acceptable risk, let us consider further some of the legal debate about risk. The law seems to be of two minds about risks and benefits: some laws make no attempt to balance the two, while others permit risks to be weighed against costs and benefits. Given these considerations, we can construct a more general principle of acceptable risk, which may provide some guidance in determining when a risk is within the bounds of moral permissibility: people should be protected from the harmful effects of technology, especially when the harms are not consented to or when they are unjustly distributed, except that this protection must sometimes be balanced against (1) the need to preserve great and irreplaceable benefits and (2) the limitation on our ability to obtain informed consent.

The principle does not offer an algorithm that can be applied mechanically to situations involving risk. Many issues arise in its use, and each use must be considered on its own merits. We can enumerate some of the issues that arise in applying the principle. First, we must define what we mean by "protecting" people from harm. Second, many disputes can arise as to what constitutes a harm. Is having to breathe a foul odor all day long a harm? Third, the determination of what constitutes a great and irreplaceable benefit must be made in the context of particular situations. Fourth, we have already pointed out the problems that arise in determining informed consent and the limitations in obtaining informed consent in many situations. Fifth, the criterion of unjust distribution of harm is also difficult to apply; some harms associated with risk are probably unjustly distributed. Sixth, an acceptable risk at one point in time may not be an acceptable risk at another point in time.