Ethics in Computing

Lesson 8 · OKSTEM College · Associate of Science in Computer Science

Computer scientists build systems used by billions of people. The ethical choices made in design — who is included, whose data is collected, how algorithms decide — have enormous real-world consequences.

Core Ethical Frameworks — how each applies to real CS decisions

📈 Consequentialism — maximize well-being, minimize harm

Core question: What action produces the best overall outcome for the most people?

Key thinkers: Jeremy Bentham, John Stuart Mill. Also known as utilitarianism.

How it applies to CS: A consequentialist engineer might accept collecting user data without explicit consent if the product genuinely improves millions of lives. They weigh the aggregate benefit against the privacy cost.

Real CS case study — Content Moderation: Should a social network auto-remove hate speech using an AI model that is 95% accurate? A consequentialist says yes: the harm prevented (millions of harmful posts removed) outweighs the false positives (5% of removed posts were benign). A different framework might refuse, because wrongly silencing someone violates a right regardless of the statistics.

Limitation: Outcomes are hard to predict. Who counts in the calculation? Facebook's algorithm optimized for engagement (a measurable "good") and unintentionally amplified outrage and misinformation.

📜 Deontology — rules and duties that must not be broken

Core question: Are there moral rules that must be followed regardless of consequences?

Key thinker: Immanuel Kant. His Categorical Imperative: "Act only according to rules you could will to be universal laws."

How it applies to CS: A deontologist would say: never deceive users, full stop. It doesn't matter if a "dark pattern" (deceptive UI) increases subscription revenue — deception is wrong categorically. Similarly, a deontologist says you must always ask for consent before collecting data, even if the data would help society.

Real CS case study — End-to-End Encryption: Apple refuses to build a government backdoor into iMessage, even when asked by law enforcement to help catch criminals. A deontological argument: users have a right to private communication — violating it is wrong regardless of who asks or why.

Limitation: Rules can conflict. "Never break encryption" and "prevent terrorist attacks" can pull in opposite directions. Kant's framework gives limited help when duties clash.

🌟 Virtue Ethics — what would a person of good character do?

Core question: What action reflects the character traits of an honest, courageous, just, and prudent person?

Key thinker: Aristotle. Virtues are habits — you become honest by practicing honesty, not by following a rule.

How it applies to CS: Instead of asking "what rule applies here?" a virtue ethicist asks "what would an engineer of good character do?" Relevant virtues: honesty (don't ship features you know are harmful), courage (raise concerns even if management pushes back), prudence (think carefully about second-order effects).

Real CS case study — Whistleblowing: Frances Haugen leaked Facebook's internal research showing the platform knew Instagram harmed teen mental health. A virtue ethics analysis: honesty and courage required sharing that information, even at personal cost — a person of good character couldn't stay silent.

Limitation: Virtues vary across cultures and communities. What counts as "prudent" in Silicon Valley's "move fast and break things" culture differs from what counts as prudent in medical device engineering.

⚖️ Justice & Fairness — are outcomes fair to all affected parties?

Core question: Are the benefits and burdens of a decision distributed fairly? Who is included or excluded?

Key thinker: John Rawls. His veil of ignorance: design systems as if you don't know which role you'll play in them. If you might end up as the person most harmed, you have every incentive to design the system fairly.

How it applies to CS: Fairness in algorithms — does your model perform equally well across races, genders, and income levels? Facial recognition systems trained mostly on light-skinned faces have significantly higher error rates for dark-skinned faces. A justice framework says this is wrong regardless of the model's overall accuracy.

Real CS case study — Credit Scoring Algorithms: An algorithm trained on historical loan data learns to reflect past discriminatory lending patterns. It accurately predicts default rates — but those rates were shaped by discriminatory practices. A justice analysis says the outcome is unfair: people are penalized for factors they couldn't control, and protected classes bear a disproportionate burden.

Limitation: "Fairness" has multiple mathematical definitions that are mutually incompatible — equal false-positive rates, equal false-negative rates, and equal predictive value (calibration) cannot all hold simultaneously when base rates differ between groups, except for a perfect classifier. Choosing which fairness metric matters is itself an ethical decision.
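The incompatibility can be shown with a few lines of arithmetic: give two groups the *same* classifier error rates, and their precision (the chance that a flagged person is actually positive) still diverges whenever their base rates differ. The base rates and error rates below are invented for illustration.

```python
# Two groups, identical true/false positive rates (equalized odds),
# different base rates of the predicted outcome. Precision still differs,
# so "equal error rates" and "equal predictive value" cannot both hold.
# All numbers are invented for illustration.

def precision(base_rate, tpr, fpr):
    """P(actually positive | predicted positive) for a group."""
    tp = base_rate * tpr          # fraction of the group: true positives
    fp = (1 - base_rate) * fpr    # fraction of the group: false positives
    return tp / (tp + fp)

tpr, fpr = 0.8, 0.1                   # same error rates for both groups
p_a = precision(0.40, tpr, fpr)       # group A: 40% base rate
p_b = precision(0.10, tpr, fpr)       # group B: 10% base rate
print(round(p_a, 2), round(p_b, 2))   # 0.84 0.47
```

A positive prediction is far more likely to be wrong for group B — even though the classifier treats both groups "equally" by the error-rate definition. Which inequality you eliminate, and which you tolerate, is the ethical choice.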

🤔 Using Frameworks Together — real decisions need multiple lenses

No single framework is complete. Real ethical decisions in CS require applying multiple frameworks and seeing where they agree or conflict.

Example — Should your company sell facial recognition to governments?

  • Consequentialist view: Depends on use case. Crime prevention benefit vs. mass surveillance harm — hard to measure in advance.
  • Deontological view: Citizens have a right not to be tracked without consent. Selling surveillance tools violates that right categorically.
  • Virtue ethics view: Would a courageous, honest engineer be comfortable with how this technology will be used? If not, don't sell it.
  • Justice view: Who bears the risk? Facial recognition is least accurate for dark-skinned women — the people most likely to be falsely flagged are those already most marginalized.

The ACM Code of Ethics (the professional standard for computer scientists) draws from all four frameworks — it requires considering consequences, respecting rights and privacy, acting with integrity, and promoting fairness.

Key Issues

ACM Code of Ethics

The Association for Computing Machinery (ACM) publishes the authoritative code of ethics for CS professionals. Key principles include:

  1. Contribute to society and human well-being.
  2. Avoid harm.
  3. Be honest and trustworthy.
  4. Be fair and take action not to discriminate.
  5. Respect privacy.
  6. Honor confidentiality.

Case study: A social media recommendation algorithm maximizes engagement. It learns that outrage drives more clicks than nuance. Is the engineer who built it ethically responsible for the resulting political polarization?

Knowledge Check

  1. Where does algorithmic bias most commonly arise from?
  2. What are GDPR and CCPA examples of?
  3. Why is the ACM Code of Ethics important?
  4. Under responsible disclosure, what should a security researcher do?
  5. What would a consequentialist primarily ask when evaluating an algorithm?
