As a discipline, ethics involves systematizing, defending, and recommending concepts of right and wrong conduct by using conceptual analysis, thought experiments, and argumentation.
-
Subfields of Ethics
- Meta-ethics: Explores ethical concepts (principles of right or wrong), their ontology (existence and relations), and epistemology (moral knowledge)
- Normative ethics: Explores practical means of determining ethical behavior
- Applied ethics: Explores the specific obligations and permissions for moral agents in specific situations or domains
-
Concerns
- Immediate Concerns: Apprehensions regarding security, privacy, and transparency
- Medium-Term Concerns: Emerging concerns about the ethical implications of AI in areas such as military use, medical care, justice, and education
- Long-Term Concerns: Fundamental, enduring worries about the development and implementation of AI in society
-
The Three Laws of Robotics (Isaac Asimov):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
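The laws are strictly ordered: each one binds only where it does not conflict with the laws above it. The Python sketch below is a toy illustration of that priority structure only; the `Action` fields and the `verdict` function are hypothetical placeholders, not any real robotics or safety API.

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering.
# Every field and predicate here is a hypothetical placeholder.

from dataclasses import dataclass

@dataclass
class Action:
    injures_human: bool         # would the action injure a human being?
    inaction_allows_harm: bool  # would *not* acting allow a human to come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    preserves_robot: bool       # does the action protect the robot's existence?

def verdict(a: Action) -> str:
    # First Law takes absolute priority: it both forbids and compels.
    if a.injures_human:
        return "forbidden (First Law)"
    if a.inaction_allows_harm:
        return "required (First Law: inaction would allow harm)"
    # Second Law applies only where the First Law is silent.
    if a.ordered_by_human:
        return "required (Second Law)"
    # Third Law is subordinate to both laws above.
    if a.preserves_robot:
        return "permitted (Third Law)"
    return "permitted (no law applies)"

# Example: an order to injure a human is refused despite the Second Law.
print(verdict(Action(injures_human=True, inaction_allows_harm=False,
                     ordered_by_human=True, preserves_robot=False)))
# -> forbidden (First Law)
```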
-
Value (Degree of importance of a thing or an action)
- Provides ideals and standards with which to evaluate things, choices, actions, and events
- Intrinsic Value, e.g. happiness, freedom, wellbeing
- Extrinsic/Instrumental Value, e.g. money
-
Norms (value-based principles, commands, and imperatives)
- Tell us what we should do, or what is expected of us
- Prescriptive Norm - Encourages positive behavior, e.g. “be fair”
- Proscriptive Norm - Discourages negative behavior, e.g. “do not discriminate”
- Other Norms:
- Statistical Regularities - Describe what people typically do, e.g. many computer scientists tend to wear black T-shirts
- Social Norms - Tell what people in a group believe to be appropriate action in that group
- Moral Norms - Prescriptive or proscriptive rules with obligatory force beyond that of social or statistical expectations, e.g. “do not use AI for behaviour manipulation”
-
Hume's Guillotine (David Hume, 1711–1776)
One should not make normative claims about what should be based only on descriptive statements about what is. This does not mean that facts play no part in our moral considerations, but that one cannot get from an “is” to an “ought” without some purely normative value statement along the way.
For example, the fact that a data set is biased does not alone imply that the data should (or shouldn’t) be biased. Instead, moral attitudes depend on other ethical considerations and preferences, not just mere facts. Why are we concerned with the issue of biased data? The problem is clearly not the mere fact that biased data exist; the real problem is that biases may amplify discrimination.
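To make the is/ought gap concrete: code can compute the descriptive fact (how large a disparity the data contain), but no prescription follows until a normative premise is supplied from outside the data. In the minimal Python sketch below, the numbers and the threshold `max_acceptable_gap` are invented for illustration; the threshold is precisely the “purely normative value statement” that Hume’s guillotine demands.

```python
# The "is": a purely descriptive statistic computed from the data.
positive_rate = {
    "group_a": 0.72,  # share of favorable outcomes for group A (hypothetical)
    "group_b": 0.45,  # share of favorable outcomes for group B (hypothetical)
}
gap = abs(positive_rate["group_a"] - positive_rate["group_b"])  # descriptive fact

# The "ought": no amount of data fixes this number; it is a normative
# premise that must be argued for on ethical grounds (here, an assumed 0.05).
max_acceptable_gap = 0.05

# Only the combination of fact + value premise yields a prescription.
if gap > max_acceptable_gap:
    print(f"Gap of {gap:.2f} exceeds the accepted threshold: intervene.")
else:
    print(f"Gap of {gap:.2f} is within the accepted threshold.")
```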
-
Ethical frameworks
- According to a recent study (Jobin et al. 2019), AI ethics has quite rapidly converged on a set of five principles:
- non-maleficence
- responsibility or accountability
- transparency and explainability
- justice and fairness
- respect for various human rights, such as privacy and security
Ref.: Chapter 1, Ethics of AI