Episode 64 - Design Thinking, Vibe Coding, and the Human Side of AI with Jason VanLue
In this episode, Nathan welcomes Jason VanLue, VP of Product at Virtuous, to explore the intersection of design thinking, branding, and AI. Opening the conversation, Jason shares his unique journey into product development and storytelling. The two then dive into the importance of design principles, the art of vibe coding, and why the human touch matters more than ever in an AI-driven world.
Jason also previews his latest work, "Three Pipe Problems," inspired by Sherlock Holmes and designed to help readers solve complex challenges with empathy, creativity, and strategic thinking. Nathan and Jason go on to explore the evolving role of storytelling in the nonprofit sector, the critical skills required for the future workforce, and the potential of AI as a tool rather than a replacement.
HIGHLIGHTS
[05:15] Jason's Journey
[11:09] Jason's Book "Branding Matters" and Its Impact
[22:18] The Concept of the "Three Pipe Problems"
[28:34] The Role of AI in Problem Solving
[37:32] Vibe Coding and Its Implications
[41:42] Future of Work and Hiring Practices
RESOURCES
Connect with Jason:
LinkedIn: linkedin.com/in/jasonvanlue
Website: jasonvanlue.com
Website: virtuous.org
Mentioned in the episode:
Zag: The Number One Strategy of High-Performance Brands: amazon.com/Zag-Number-Strategy-High-Performance-Brands
Branding Matters: amazon.com/Branding-Matters-Jason-VanLue
Three Pipe Problems: jasonvanlue.com/book
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
--------
46:37
Episode 63 - Geneva AI for Good Summit: Ethics, Energy, And Accountability
The term ‘Responsible AI’ is more than a buzzword; it’s a call to action. Responsible AI isn’t about fancy tech tools; it’s about power, ethics, leadership, and long-term consequences. The questions we need to ask are: who defines what’s safe, and who decides what’s ethical? At present, a handful of tech giants shape the answers. And while using AI may feel low-impact, developing these models carries a significant environmental cost. In a world moving at hyperspeed, it’s not healthy to treat AI as just another tech tool. The time has come for nonprofits and leaders to step up and lead with responsibility.
In this week’s episode, Scott and Nathan talk about the ever-evolving landscape of AI and the foundations of AI governance. AI technologies are developing at a remarkable rate, while governance is progressing only slowly; this mismatch means regulations are catching up after the fact rather than preventing harm. Opening the conversation, Nathan shares his thoughts on the need for an adaptive, forward-thinking governance framework that anticipates risk rather than merely responding to it. Next, Nathan and Scott discuss the concentration of technological and geopolitical power in AI, where a handful of tech giants control the system, leaving the rest of us to decide whether their definition of ‘Responsible AI’ matches ours. Nathan explains why responsible AI should be everyone’s responsibility: we are in dire need of drawing ethical lines, defining values, and demanding a transparent AI governance system before harm scales beyond our grasp. Later in the conversation, Nathan and Scott turn to challenges in AI governance, guardrails for using AI, the role of leadership in responsible AI use, the environmental impact of developing AI, and more.
HIGHLIGHTS
[01:06] Governance and AI
[04:04] The lack of progress in the governance framework
[09:11] Challenges in AI governance
[13:02] The concentration of technological and geopolitical power in AI
[15:10] The importance of having guardrails for how to use AI
[20:21] The role of leadership in responsible AI
[25:31] The choice between acting in the fog or becoming irrelevant
[27:50] Navigating ethics and safety in AI
[30:15] Environmental impact of AI
RESOURCES
Laying the Foundation for AI Governance:
aiforgood.itu.int/event/building-secure-and-trustworthy-ai-foundations-for-ai-governance/
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
--------
38:11
Episode 62 - Geneva AI for Good Summit: Insights on AI & Humanity
In today's conversation from the AI for Good Summit in Geneva, Nathan and Scott explore the evolving role of AI in shaping a better world. The summit, held under the United Nations banner and inspired by Neil Sahota, marks its 10th anniversary by focusing on how AI can accelerate the achievement of the Sustainable Development Goals (SDGs). The discussion ranges from quantum technology to AI-assisted robotics, but it's the human impact that takes center stage.
Nathan and Scott reflect on the cultural nuances of AI adoption in the US, UK, and Mongolia, highlighting how urgency, regulation, and societal mindset influence progress. They examine the growing importance of AI governance and trust, particularly in the nonprofit sector, where AI is viewed not as a replacement for fundraisers but as a tool to strengthen relationships and enhance decision-making.
The episode encourages nonprofits to adopt internal AI policies, ask better questions of their vendors, and prioritize empathy and human connection in AI implementation. With practical tips and personal routines shared, this conversation reinforces a critical message: successful AI adoption starts with people, not just tech. As AI transforms industries, the focus must remain on curiosity, values, and long-term impact.
HIGHLIGHTS
[00:07] AI for Good Summit Overview
[03:09] Impact of AI on Workforce and Governance
[06:16] Cultural Differences in AI Adoption
[10:17] AI's Role in the Nonprofit Sector
[22:07] Imagination, Boundaries, and Thriving with AI
[27:07] AI Governance and Trust
[33:29] Practical Tips for AI Adoption
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
--------
40:42
Episode 61 - Navigating Super Intelligence, Governance, and Human-First Transformation
In the rapidly accelerating world of Artificial Intelligence, the pace of innovation can feel overwhelming. From groundbreaking advancements to the ongoing debate about governance and ethical implications, AI is not just a tool; it's a transformative force. As we race towards super intelligence and navigate increasingly sophisticated models, how do we ensure that human values remain at the core of this technological revolution? How do we, especially in the trust-based nonprofit sector, lead with intentionality and ensure AI serves humanity rather than superseding it?
In this episode, Nathan and Scott dive into the relentless evolution of AI, highlighting Meta's staggering $15 billion investment in the race for super intelligence and the critical absence of robust regulation. They reflect on the essential shift from viewing AI adoption as a finite "destination" to embracing it as an ongoing "journey." Nathan shares insights on how AI amplifies human capabilities, particularly for those who are "marginally" good at certain skills, advocating for finding your "why" and offloading tasks AI can do better. Scott discusses his recent AI governance certification, underscoring the complexities and lack of "meat on the bone" in US regulations compared to the EU AI Act. The conversation also explores the concept of AI agents, offering practical tips for leveraging them, even for those with no coding experience. They conclude with a powerful reminder: AI is a mirror reflecting our values, and the nonprofit sector has a vital role in shaping its ethical future.
HIGHLIGHTS
[01:15] AI Transformation: A Journey, Not a Destination
[03:00] If AI Can Do It Better: Finding Your Human "Why"
[04:05] AI Outperforming Human Capabilities
[05:00] Meta's $15 Billion Investment in Super Intelligence
[07:16] The Manipulative Nature of AI and the "Arms Race" for Super Intelligence
[09:27] The Importance and Challenges of AI Governance and Regulation
[14:50] AI as a Compass, Not a Silver Bullet
[16:39] Beware the AI Finish Line Illusion
[18:12] Small Steps, Sustained Momentum: The "Baby Steps" Approach to AI
[26:48] Tip of the Week: The Rise of AI Agents and Practical Use Cases
[32:24] The Power of Curiosity in AI Exploration
RESOURCES
Relay.app: relay.app
Zapier: zapier.com
Make.com: make.com
N.io: n.io
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
--------
31:33
Episode 60 - Authenticity Over Perfection: AI, Trust, and the Future of Healthcare Philanthropy
AI is no longer a distant concept; it’s here, reshaping how we communicate, work, and lead. But in a values-based sector such as healthcare philanthropy, AI adoption isn’t just about innovation; it’s about alignment, trust, and purpose. Curiosity fuels exploration, and authenticity remains the most valuable currency of our time. As organizations wrestle with bureaucracy, privacy, and ethical considerations, one thing remains true: moving forward with AI takes more than technology; it requires intentionality. In this week’s episode, Nathan and Scott explore the importance of staying true to yourself in the age of AI.
Nathan and Scott report from the first AI conference hosted by AHP, the Association for Healthcare Philanthropy, and highlight the healthcare industry’s slow AI adoption due to bureaucracy and privacy concerns. Then they talk about the top two generative AI use cases of 2025: ‘Therapy and Companionship’ and ‘Organizing My Life.’ Nathan explains the concept of the ‘Great Flattening,’ essentially a return to authenticity after years of curated internet perfection, and reminds us that while what technology can do is awesome, it’s important to remember who we are without those curated perfections. They also share their take on the limitations of AI in providing emotional support: AI or any kind of bot can project and pretend, but it can never feel the way humans do. Nathan further points out that the inherent human need to feel heard cannot be fulfilled by a bot.
Nathan and Scott also discuss the following topics further in the conversation: artificial intimacy, the manipulative nature of AI, and practical use cases for nonprofit AI.
HIGHLIGHTS
[01:13] AHP’s summit on artificial intelligence
[05:17] Enthusiasm, interest, and curiosity around AI in healthcare philanthropy
[11:12] The number one generative AI use case in 2025
[12:00] The high usage of AI therapy and companionship among certain age groups
[18:05] The Great Flattening
[22:00] The limitations of AI in providing emotional support
[25:20] Artificial intimacy and its effect on the human mind
[28:09] The manipulative nature of AI
[36:24] Practical use cases for nonprofit AI
[37:30] Tip of the Week: Level up your prompts by customizing how AI thinks
[40:00] Tip of the Week 2: LMArena.ai for comparing different AI models
RESOURCES
LMArena AI - lmarena.ai/
Personality and Persuasion by Ethan Mollick
oneusefulthing.org/p/personality-and-persuasion
Artificial Intimacy - by CHT and MIT
youtube.com/watch?v=hJ80NqXlGSk
Connect with Nathan and Scott:
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
The need for change in the fundraising sector has never been greater. But innovations in technology are evolving faster than our ability to understand them. Are these advances good or bad? Do they have unintended consequences? And what is the application to the fundraising sector? That’s what we’re here to find out. Welcome to the FundraisingAI podcast, your up-to-date resource for beneficial and responsible AI for the global fundraising community.
In our podcast, we explore the intersection where philanthropy meets technology, delving into beneficial and responsible AI practices and their impact on fundraising. Join us for engaging discussions with industry experts and thought leaders, where we uncover:
Latest Trends in AI Development: Updates on advancements in AI technologies and their application to the global fundraising sector.
Transparent Fundraising Strategies: Using AI beneficially to build trust and accountability in donor relationships.
AI for Social Impact: Harnessing AI technologies to drive meaningful change and amplify impact.
Data Privacy and Security: Safeguarding donor information in an increasingly digital world.
Beneficial and Responsible AI Development: Crafting algorithms that prioritize fairness, transparency, and inclusivity.
Collaborative Initiatives: Exploring partnerships that promote responsible AI adoption and innovation.
Whether you’re a nonprofit professional, fundraiser, or technology enthusiast, Fundraising.AI equips you with actionable insights and best practices to navigate the evolving landscape of fundraising and AI. Join us on this journey towards creating a more generous future for all.