What is bias in AI systems? Provide some examples.
How do societal biases get reflected in AI models?
Explain the difference between data bias and algorithmic bias.
What techniques can be used to detect bias in AI systems?
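One widely used detection technique the question above points at is the "four-fifths rule" (disparate impact ratio). The sketch below is illustrative only: the function name, group labels, and prediction data are all made up for the example, not taken from any real system or library.

```python
# Hypothetical sketch of one common bias-detection check: the
# "four-fifths rule" (disparate impact ratio). All data is illustrative.

def disparate_impact_ratio(predictions, groups, protected, reference):
    """Ratio of positive-prediction rates: protected group vs. reference.
    Values below ~0.8 are often flagged as potential disparate impact."""
    def rate(g):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(preds) / len(preds)
    return rate(protected) / rate(reference)

# Example: group "M" receives positive predictions 75% of the time,
# group "F" only 25% of the time.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["M", "M", "M", "M", "F", "F", "F", "F"]
ratio = disparate_impact_ratio(preds, groups, protected="F", reference="M")
print(round(ratio, 2))  # 0.33 — well below 0.8, so flagged for review
```

In practice this kind of check is one of many; toolkits such as Fairlearn and AIF360 bundle it with other group-fairness metrics.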
How can datasets be made more representative to mitigate bias?
What is the significance of fairness in AI, and how do you define it?
Explain demographic parity and its importance in AI fairness.
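Demographic parity is concrete enough to show in code: a classifier satisfies it when its positive-prediction rate is the same across groups. The minimal sketch below measures the gap; the function name and example data are hypothetical, chosen purely for illustration.

```python
# Hypothetical sketch: demographic parity difference for a binary
# classifier's predictions across two groups. A value of 0 means the
# positive-prediction rates are equal (parity holds).

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between
    the two groups present in `groups`."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    a, b = rates.values()
    return abs(a - b)

# Example: group "A" gets positive predictions 3/4 of the time,
# group "B" only 1/4 of the time.
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Note that demographic parity ignores the true labels entirely, which is why it can conflict with accuracy-based fairness criteria such as equalized odds.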
What challenges do organizations face in implementing fairness in AI models?
Can AI systems ever be completely free of bias? Why or why not?
How does regular auditing of AI systems help reduce bias?
What do you understand by AI safety, and why is it critical?
Explain the risks of adversarial attacks on AI models.
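A canonical adversarial attack worth knowing for the question above is the Fast Gradient Sign Method (FGSM): perturb the input by a small step in the sign of the loss gradient. The sketch below applies FGSM to a toy logistic-regression "model"; the weights, input, and step size are invented for illustration, and a real attack would target a trained network with a much smaller epsilon.

```python
import numpy as np

# Hypothetical sketch of the Fast Gradient Sign Method (FGSM) against a
# toy logistic-regression model. Weights, input, and eps are made up.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=1.0):
    """Shift x by eps in the sign of the loss gradient w.r.t. x.
    For binary cross-entropy, that gradient is (p - y_true) * w."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])            # model predicts class 1 (p > 0.5)
x_adv = fgsm_perturb(x, w, b, y_true=1.0)

print(sigmoid(np.dot(w, x) + b) > 0.5)      # True: original classified as 1
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # False: the attack flips it
```

The risk FGSM illustrates is that the perturbation is computed directly from the model's own gradients, so a small, targeted change to the input can flip the prediction even when the input still looks unremarkable.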
How can unintended consequences in AI behavior be avoided?
What measures can ensure the robustness of AI systems?
What is meant by verification and validation in the context of AI safety?