Practice Questions & Tips
This final lesson brings everything together with rapid-fire questions to test your recall, scenario-based ethics dilemmas to test your reasoning, and practical tips for demonstrating ethical maturity in interviews. Treat this as your final review before the real thing.
Rapid-Fire Questions
Practice answering each of these in 60–90 seconds. Concise, structured answers beat long, rambling ones.
| # | Question | Key Points to Hit |
|---|---|---|
| 1 | Name three types of bias in ML. | Historical, representation, measurement. Give a one-sentence example of each. |
| 2 | What is demographic parity? | Equal positive prediction rates across groups. Ignores base rate differences. Cannot hold simultaneously with calibration when base rates differ. |
| 3 | What is SHAP? | Shapley values for feature attribution. Game-theoretic foundation. Gives both local and global explanations. Stronger guarantees than LIME. |
| 4 | What is differential privacy? | Mathematical guarantee that output does not change significantly with any one person's data. Epsilon parameter controls privacy-utility trade-off. Used by Apple, Google, US Census. |
| 5 | What are the EU AI Act risk categories? | Unacceptable (banned), high (heavily regulated), limited (transparency), minimal (no restrictions). Risk-based approach. |
| 6 | What is a model card? | Standardized documentation: model details, intended use, performance metrics by group, limitations, ethical considerations. "Nutrition label" for AI. |
| 7 | What is federated learning? | Train models across devices without centralizing data. Gradients shared, not data. Needs DP and secure aggregation for true privacy. |
| 8 | What is the COMPAS case? | Recidivism prediction tool. Different false positive rates by race. Demonstrates impossibility of satisfying all fairness metrics simultaneously. |
| 9 | Name two risks of post-hoc explanations. | Unfaithfulness (explanation may not reflect actual reasoning). Gaming (users learn to manipulate highlighted features). |
| 10 | What is machine unlearning? | Removing the influence of specific data from a trained model. Required for right-to-be-forgotten compliance. Full retraining is gold standard but expensive. |
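The fairness metrics behind questions 2 and 8 are easy to demonstrate in code, and interviewers sometimes ask you to do exactly that. Here is a minimal sketch in plain Python — the labels, predictions, and group assignments are made up for illustration:

```python
# Toy audit data: ground-truth labels, model decisions, and group membership.
y_true = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def positive_rate(g):
    """P(pred = 1) within group g -- demographic parity compares this across groups."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def false_positive_rate(g):
    """P(pred = 1 | true = 0) within group g -- the rate at issue in COMPAS."""
    negatives = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 0]
    return sum(negatives) / len(negatives)

dp_gap  = abs(positive_rate("A") - positive_rate("B"))
fpr_gap = abs(false_positive_rate("A") - false_positive_rate("B"))
print(f"demographic parity gap: {dp_gap:.2f}")
print(f"false positive rate gap: {fpr_gap:.2f}")
```

Note that in this toy data the false positive rates happen to be equal while the selection rates are not — a small illustration of why the different fairness definitions can disagree on the same model.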
Scenario-Based Ethics Dilemmas
These scenarios test your ability to reason through complex trade-offs. Practice thinking through each one out loud, using the ETHICS framework (Evaluate stakeholders, Think about what could go wrong, How to measure harm, Implement safeguards, Communicate transparently, Sustain responsibility).
Scenario 1: The Biased Healthcare Model
Situation: Your team has built a model that predicts which patients will develop complications after surgery. The model has 94% accuracy overall, but you discover it has 88% accuracy for Black patients and 96% for white patients. The hospital wants to deploy it next month because it will save lives overall. What do you do?
Key considerations: Deploying now saves lives but puts Black patients at higher risk. Delaying deployment to fix the disparity delays life-saving benefits for all patients. The disparity may reflect historical data biases (less data on Black patients, different treatment patterns) rather than inherent model limitations.
Strong answer elements: Deploy with explicit human oversight for patients in the lower-accuracy demographic. Add a confidence flag that alerts clinicians when the model's prediction may be less reliable. Simultaneously fast-track data collection and model improvement for the underserved group. Be transparent with the hospital about the limitation. Set a concrete timeline for achieving equitable performance, with the option to withdraw if it is not met.
Scenario 2: The Content Moderation Dilemma
Situation: Your content moderation AI correctly removes 99.5% of hate speech, but it disproportionately flags content from African American users because it misclassifies African American Vernacular English (AAVE) as toxic. Civil rights organizations are publicly criticizing the platform. Product leadership wants a quick fix. What do you recommend?
Key considerations: Reducing sensitivity to fix the AAVE problem might allow more actual hate speech through. The issue is fundamentally about training data that treats AAVE as non-standard. Quick fixes (whitelisting AAVE terms) may miss the systemic issue.
Strong answer elements: Short-term: add human review for content flagged from users whose language patterns match AAVE characteristics (not race-based, language-pattern-based). Long-term: retrain the toxicity model with data that properly represents AAVE as legitimate language variation. Engage with the African American community to understand which content is actually harmful versus culturally normative. Publish transparency reports showing flagging rates by demographic.
Scenario 3: The Competitive Pressure
Situation: Your competitor just launched a facial recognition product for retail stores that identifies repeat shoplifters. Your CEO wants to launch a competing product within 3 months. Your team's analysis shows the technology has a 3x higher false positive rate for Black individuals. The CEO says "ship it with a disclaimer" and argues that not shipping loses market share to a competitor whose product is probably worse. How do you respond?
Key considerations: False positives mean innocent Black customers being flagged and potentially confronted by security — a genuinely harmful outcome. The "competitor is worse" argument does not absolve your company of responsibility. Market pressure is real but does not override ethical obligations.
Strong answer elements: Push back on shipping with known disparate impact. Propose an alternative: launch with a use case that does not involve identifying individuals (aggregate foot traffic analytics, inventory optimization). Use the 3 months to improve the facial recognition model's equity. If the CEO insists, escalate to the ethics board or legal team, documenting the known disparity and the potential for discriminatory impact lawsuits.
Scenario 4: The Data Ethics Trade-off
Situation: Your team discovers that a publicly available dataset used to train your medical imaging model contains images scraped from a hospital's unsecured server. The patients never consented to their images being used for AI training. The model works well and could save lives. Collecting a new consented dataset would take 18 months and cost $2 million. What do you do?
Key considerations: Using data obtained without consent, even if publicly available, is ethically problematic and potentially illegal under GDPR. However, the model has genuine life-saving potential. The patients may not know their data was exposed, adding a notification obligation.
Strong answer elements: Stop using the dataset immediately. Notify the hospital about the security breach so they can secure the server and notify affected patients. Begin collecting a properly consented dataset. In the interim, explore whether existing consented medical imaging datasets could serve as a stopgap. Consider whether the trained model weights need to be discarded (machine unlearning) or whether the model can be retained while the training data is replaced. Document everything for legal and compliance review.
Interview Tips for AI Ethics Questions
Use the ETHICS Framework
Structure every answer: Evaluate stakeholders, Think about what could go wrong, How to measure harm, Implement safeguards, Communicate transparently, Sustain responsibility. This prevents you from missing key dimensions.
Be Concrete, Not Abstract
"Bias is bad and we should fix it" earns zero points. "I would run fairness audits using equalized odds across race and gender, set thresholds at 5% disparity, and implement automated alerts" earns full marks.
Acknowledge Trade-offs
Ethics questions rarely have clean answers. Show you understand competing values: privacy vs safety, fairness vs accuracy, innovation speed vs responsible deployment. The interviewer wants to see your reasoning, not a "right" answer.
Reference Real Examples
Cite COMPAS, Gender Shades, Amazon hiring, GPT-2 staged release, Google Project Maven, and other real cases. This shows you study the field and learn from precedent, not just theory.
Propose Practical Mitigations
For every risk you identify, propose a specific mitigation. "This model could exhibit racial bias" is incomplete. "I would test for racial bias using equalized odds, and if disparity exceeds our threshold, I would implement group-specific threshold adjustment while collecting more representative data" is complete.
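One simple way to implement the group-specific threshold adjustment mentioned above is to pick a per-group cutoff that equalizes selection rates (equalizing true positive rates works the same way when labels are available). A sketch with hypothetical scores and an assumed 40% target rate:

```python
# Group-specific threshold adjustment: choose a decision threshold per
# group so that each group's selection rate hits the same target.

def per_group_threshold(scores, target_rate):
    """Score of the k-th highest item, where k covers target_rate of the group."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(scores)))
    return ranked[k - 1]

scores_by_group = {
    "A": [0.9, 0.8, 0.7, 0.4, 0.2],
    "B": [0.6, 0.5, 0.4, 0.3, 0.1],
}
TARGET = 0.4  # select the top 40% of each group (illustrative)

thresholds = {g: per_group_threshold(s, TARGET) for g, s in scores_by_group.items()}
decisions = {
    g: [int(score >= thresholds[g]) for score in s]
    for g, s in scores_by_group.items()
}
```

Post-processing like this is quick to deploy, but it treats the symptom; as the example answer says, it should run alongside collecting more representative data.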
Know Your Company
Research the company's AI ethics stance before the interview. Know their published principles, any controversies, and their responsible AI tools. Reference these in your answers: "I know Microsoft uses Fairlearn for this; I would apply a similar approach here."
Frequently Asked Questions
Are AI ethics questions asked for engineering roles, or just PM and policy roles?
Increasingly for all roles. Google, Meta, Microsoft, and Amazon now include ethics-related questions in ML engineering interview loops. The format differs: engineers get questions about implementing fairness metrics and debiasing techniques; PMs get questions about product decisions and stakeholder communication; researchers get questions about dual-use risks and responsible publication. Even frontend engineers working on AI-powered features may be asked about user consent and transparency in UX design.
How technical do AI ethics answers need to be?
Match the technical depth to the role. For ML engineering roles, mention specific techniques: SHAP values, DP-SGD with epsilon of 8, equalized odds, adversarial debiasing. For PM roles, focus on frameworks, stakeholder impact, and how you would work with the ML team: "I would ask the team to run fairness audits and present the results before we approve launch." For both, combine technical awareness with practical reasoning. The worst answer at any level is purely abstract philosophy with no actionable steps.
What if the interviewer disagrees with my ethical position?
This is often intentional. Interviewers push back to test how you defend your reasoning under pressure. The key: defend your position with evidence and reasoning, not emotion. Acknowledge valid points in the counter-argument. Be willing to update your position if presented with new information, but do not flip-flop just to please the interviewer. For example: "That is a fair point about the trade-off between speed and safety. Given that, I would still prioritize the fairness testing, but I would propose a faster approach: automated testing in the CI/CD pipeline rather than a manual review gate, which achieves the same goal with less delay."
Should I bring up ethics proactively or wait for the interviewer to ask?
Proactively, but naturally. When designing a system, mention bias and fairness considerations as part of your design, not as an afterthought. "For the training data, I would audit for demographic representation and historical biases before training" is natural. Do not shoehorn ethics into unrelated questions — if asked about database indexing, you do not need to discuss AI fairness. The goal is to show that ethics is part of how you think, not a separate checklist you memorized.
What are the most common AI ethics interview mistakes?
Five common mistakes: (1) Being too abstract — philosophical musings about AI and society without concrete, actionable answers. (2) Being too absolute — "we should never use AI for X" without acknowledging context and trade-offs. (3) Ignoring implementation — identifying problems without proposing solutions. (4) Treating ethics as someone else's job — "the ethics team handles that" signals you do not take personal responsibility. (5) Not knowing real cases — if you cannot cite a single real-world AI ethics incident, it signals you have not engaged with the field beyond surface-level preparation.
How do I prepare for AI ethics questions if I have no ethics background?
You do not need a philosophy degree. Focus on: (1) This course — it covers the technical, legal, and practical dimensions that interviews actually test. (2) Real cases — read about COMPAS, Gender Shades, Amazon hiring, GPT-2 release, Google Gemini image generation controversy. Understand what went wrong and what should have been done differently. (3) Regulatory basics — know the EU AI Act risk categories, GDPR's automated decision provisions, and the NIST AI RMF at a high level. (4) Tools — try Fairlearn or AI Fairness 360 on a sample dataset. Hands-on experience with fairness testing is more valuable than reading 10 papers about it. (5) Your own products — think about the AI products you use daily and identify their ethical dimensions. This builds intuition.
Is it okay to say "I do not know" to an AI ethics question?
Yes, if followed by reasoning. "I do not know the exact provisions of the EU AI Act for this scenario, but based on my understanding of risk-based regulation, I would expect this application to fall into the high-risk category, which means we would need X, Y, and Z safeguards" is a strong answer. "I do not know" followed by silence is a weak answer. Interviewers evaluate your reasoning process, not your ability to memorize regulations. Honest acknowledgment of uncertainty paired with structured reasoning is one of the strongest signals of ethical maturity.
What resources should I study beyond this course?
Recommended reading: (1) "Weapons of Math Destruction" by Cathy O'Neil — the canonical book on algorithmic bias. (2) "Race After Technology" by Ruha Benjamin — how technology reproduces racial inequality. (3) The Gender Shades paper by Buolamwini and Gebru. (4) Google's AI Principles and Microsoft's Responsible AI Standard — see how companies operationalize ethics. (5) The EU AI Act text (at least the risk classification section). (6) NIST AI RMF documentation. (7) Fairlearn documentation and tutorials for hands-on fairness testing. (8) The Alignment Forum and Anthropic's research blog for frontier AI safety perspectives.
Final Checklist
- Name and explain at least 4 types of ML bias with real-world examples
- Compare demographic parity, equalized odds, and calibration — and explain why you cannot have all three
- Explain SHAP and LIME in plain language, including their limitations
- Describe differential privacy and why it matters for AI training
- Summarize the EU AI Act's risk categories and key requirements
- Discuss the COMPAS case and its implications for fairness definitions
- Walk through how you would detect and respond to bias in a deployed model
- Explain the right to be forgotten and the challenge of machine unlearning
- Discuss at least 3 societal impact concerns (deepfakes, job displacement, surveillance, environment)
- Propose practical mitigations for every ethical risk you identify
- Articulate your personal ethical framework for AI decision-making
- Reference at least 5 real-world AI ethics cases with specific lessons learned
Lilly Tech Systems