Wendell Wallach, who has been in the AI ethics game longer than just about anyone and has several books to his name on the subject, talks about his dissatisfaction with talk of “value alignment,” why traditional moral theories are not helpful for doing AI ethics, and how we can do better.
Advertising Inquiries: https://redcircle.com/brands
Privacy & Opt-Out: https://redcircle.com/privacy
--------
42:22
--------
Is AI a Person or a Thing… or Neither?
It would be crazy to attribute legal personhood to AI, right? But then again, corporations are regarded as legal persons, and there seems to be good reason for doing so. In fact, some rivers are classified as legal persons. My guest, David Gunkel, author of many books including “Person, Thing, Robot,” argues that the classic legal distinction between ‘person’ and ‘thing’ doesn’t apply well to AI. How should we regard AI in a way that allows us to create it in a legally responsible way? All that and more in today’s episode.
--------
47:25
--------
How Do You Control Unpredictable AI?
LLMs behave in unpredictable ways. That’s a gift and a curse: their unpredictability both allows for their “creativity” and makes them hard to control (a bit like a real artist, actually). In this episode, we focus on the cyber risks of AI with Walter Haydock, a former national security policy advisor and the Founder of StackAware.
--------
51:38
--------
The AI Job Interviewer
AI can stand between you and getting a job. That means that to make money and support yourself and your family, you may have to convince an AI that you’re the right person for the job. And yet, AI can be biased and fail in all sorts of ways. This is a conversation with Hilke Schellmann, investigative journalist and author of “The Algorithm,” along with her colleague Mona Sloane, Ph.D., an Assistant Professor of Data Science and Media Studies at the University of Virginia. We discuss Hilke’s book and all the ways things go sideways when people are looking for work in the AI era. Originally aired in season one.
--------
42:13
--------
Accuracy Isn’t Enough
We want accurate AI, right? As long as it’s accurate, we’re all good? My guest, Will Landecker, CEO of Accountable Algorithm, explains why accuracy is just one metric among many to aim for. In fact, we have to make tradeoffs among accuracy, relevance, and normative (including ethical) considerations in order to get a usable model. We also cover whether explainability is important, whether it’s even on the menu, and the risks of multi-agentic AI systems.
I have to roll my eyes at the constant clickbait headlines on technology and ethics. If we want to get anything done, we need to go deeper. That’s where I come in. I’m Reid Blackman, a former philosophy professor turned AI ethics advisor to government and business. If you’re looking for a podcast that has no tolerance for the superficial, try out Ethical Machines.