The principle of beneficence says “do good”, while the principle of non-maleficence states “do no harm”.
AI ethics aims to mitigate ethical risks in AI applications, such as discrimination, privacy violations, and physical and social harm, with a primary focus on non-maleficence.
A common pitfall is that moral problems are treated as things that can be solved by technical “fixes” or by good design alone.
Purely technical approaches tend to ignore the wider ethical and societal context in which a system operates, including questions of control, governance, and societal impact.
Consider whether the city’s healthcare organisation should move from “reactive” to “preventive” healthcare.
- Benefits
  - Sickness prevention has significant potential to improve citizens’ health and quality of life
  - Allows better impact estimation and planning of basic healthcare services
  - Potential to significantly reduce social and healthcare costs
- Problems
  - The systems raise a number of legal and ethical issues regarding privacy, security, and the use of data (see the consent-gating sketch after this list):
    - Where is the border between acceptable prevention and unacceptable intrusion?
    - Does the city have a right to use private, sensitive medical data to identify high-risk patients?
    - How is consent to be given, and what will happen to people who don’t give their consent?
    - What about people who do not give consent because they are unable to?
  - Raises the fundamental question of the city’s role:
    - If the city has information about a potential health risk and does not act on that data, is the city guilty of negligence?
  - Are citizens treated equally in the physical and digital worlds?
    - If a person passes out in real life, we call an ambulance without having explicit permission to do so.
    - In the digital world, privacy concerns may prevent us from contacting citizens.
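
To make the consent questions above concrete, here is a minimal Python sketch of consent-gated data use. The `PatientRecord` shape, the `consent` flag, and the screening filter are hypothetical assumptions made up for illustration, not a description of any real health-data system:

```python
from dataclasses import dataclass

# Hypothetical record shape; a real health-data system would be far richer.
@dataclass
class PatientRecord:
    patient_id: str
    consent: bool            # has the citizen explicitly opted in?
    risk_factors: list[str]

def eligible_for_screening(records: list[PatientRecord]) -> list[PatientRecord]:
    """Only records with explicit consent may enter the preventive risk model."""
    return [r for r in records if r.consent]

records = [
    PatientRecord("a1", consent=True,  risk_factors=["hypertension"]),
    PatientRecord("b2", consent=False, risk_factors=["diabetes"]),  # never screened
]

for record in eligible_for_screening(records):
    print(f"screen {record.patient_id}: {', '.join(record.risk_factors)}")
```

Note how the non-consenting patient silently disappears from the preventive system: that invisibility is exactly the inequality the questions above point at.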
- If your answer is something like “yes, the city should seek an ethically and legally acceptable way to use those methods – there are so many advantages compared to the possible risks”, you were probably using a form of moral reasoning called "utilitarianism".
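
To see what that utilitarian weighing looks like mechanically, here is a minimal sketch that scores the benefits and harms listed above and compares the totals. Every utility number is an invented assumption, purely for illustration:

```python
# A toy utilitarian calculus: every score below is a made-up, illustrative
# utility value, not an empirical estimate of anything.
benefits = {
    "improved health and quality of life": 8,
    "better planning of basic healthcare services": 4,
    "reduced social and healthcare costs": 6,
}
harms = {
    "privacy and security risks": -5,
    "intrusion beyond acceptable prevention": -4,
    "unequal treatment of non-consenting citizens": -3,
}

net_utility = sum(benefits.values()) + sum(harms.values())

# A (naive) utilitarian adopts preventive healthcare exactly when the
# aggregate benefit outweighs the aggregate harm.
if net_utility > 0:
    print(f"net utility {net_utility:+d}: move to preventive healthcare")
else:
    print(f"net utility {net_utility:+d}: stay with reactive healthcare")
```

The point is the structure, not the numbers: a utilitarian endorses whichever option maximises aggregate well-being, so the verdict flips as soon as the harms are weighted more heavily than the benefits.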
Ref.: Ethics of AI