Tuesday, May 21, 2024

Missy Cummings — We need to overcome AI’s inherent human bias

Summary:
World Economic Forum — We need to overcome AI's inherent human bias, by Missy Cummings, Director, Humans & Autonomy Laboratory.




Like I've been saying. GIGO.

This is somewhat similar to the systems/game-theoretical approach to military strategy that was highly popular with strategists affecting policy at the time of the Vietnam War.
Missy Cummings | Director, Humans & Autonomy Laboratory


Justifiably, there is a growing debate on the ethics of AI use. How do we roll out AI-based systems that cannot reason about some of the ethical conundrums that human decision-makers need to weigh – issues such as the value of a life and ending deep-seated biases against under-privileged groups? Some even propose halting the rollout of AI before we have answered these tough questions.

I would argue that it’s not acceptable to reject today’s AI due to perceived ethical issues. Why? Ironically, I believe it might be unethical to do so.

Greater good

At its core, there is a “meta ethics” issue here.

How can we advocate halting the deployment of a technology solely because of a small chance of failure, when we know that AI technologies harnessed today could definitely save millions of people?
The basis of utilitarian consequentialist ethics is "utility": "good" is defined in terms of the greatest good for the greatest number.

Deontological ethics is rule-based. Kantian deontological ethics rests on the rule that decisions be judged by whether the action can be generalized as a universal principle, which is a philosophical way of stating the Golden Rule.

Virtue ethics is based on a constellation of virtues that do not necessarily align. Practical wisdom must be applied as the criterion of reason.

Moral sentiments theories like those of David Hume and Adam Smith in The Theory of Moral Sentiments are based on a moral sensibility or refined feeling.

Situational ethics denies that there is a universal approach to ethical decision-making: every case is a special case and needs to be approached as such.

Situational ethics, or situation ethics, takes into account the particular context of an act when evaluating it ethically, rather than judging it according to absolute moral standards. In situation ethics, within each context, it is not a universal law that is to be followed, but the law of love. A Greek word used to describe love in the Bible is "agape". Agape is the type of love that shows concern about others, caring for them as much as one cares for oneself. Agape love is conceived as having no strings attached to it and seeking nothing in return; it is a totally unconditional love. Proponents of situational approaches to ethics include Kierkegaard, Sartre, de Beauvoir, Jaspers, and Heidegger.
Specifically Christian forms of situational ethics placing love above all particular principles or rules were proposed in the first half of the twentieth century by Rudolf Bultmann, John A. T. Robinson, and Joseph Fletcher. These theologians point specifically to agapē, or unconditional love, as the highest end. Other theologians who advocated situational ethics include Josef Fuchs, Reinhold Niebuhr, Karl Barth, Emil Brunner, Dietrich Bonhoeffer, and Paul Tillich.  Tillich, for example, declared that "Love is the ultimate law."
Fletcher, who became prominently associated with this approach in the English-speaking world due to his book (Situation Ethics), stated that "all laws and rules and principles and ideals and norms, are only contingent, only valid if they happen to serve love" in the particular situation, and thus may be broken or ignored if another course of action would achieve a more loving outcome. Fletcher has sometimes been identified as the founder of situation ethics, but he himself refers his readers to the active debate over the theme that preceded his own work.
Perennial Wisdom is in agreement with "the law of love" as supreme, while also emphasizing that there are categories of mutual upholding, such that different conditions result in different responsibilities independently of specific contexts and circumstances. For example, parents' responsibility is to provide first for their own families; citizens' responsibility is first to their own countries.

See also: Ethical dilemmas should not halt the rollout of AI. Here’s why

Kartik Hosanagar | Professor, The Wharton School, University of Pennsylvania
Mike Norman
Mike Norman is an economist and veteran trader whose career has spanned over 30 years on Wall Street. He is a former member and trader on the CME, NYMEX, COMEX and NYFE and he managed money for one of the largest hedge funds and ran a prop trading desk for Credit Suisse.
