Robotic Answers From Machines

I listen to a handful of podcasts most weeks, and within the past 24 hours two of them highlighted the problems of ai (you may notice I am not capitalizing it as usual; keep reading to see why).  Both spoke to the same issue, and that got me thinking.

In one of the podcasts, it was reported that some ai models used by children (for ‘companionship’) have given poor counsel, leading to self-harm, estrangement from family, rejection of theism, and in some cases even suicidal ideation.  The other podcast was tamer, but still took a fairly radical stance on what at first seemed like a strange line in the sand.  It proposed that we refuse to give in to the common misperception that ai has a gender or is human.  When the algorithm responds, refuse to call it a “he” or “she”.  The host recommended that we force ourselves to remember that ai is merely a tool and not a person.  It is NOT capable of wise counsel.  It cannot respond in a responsible way.  He even went so far as to say we should never thank ai.  Thanking it betrays the notion in our minds that it is sentient and has a will.  It does not.

An algorithm spits back data.  It produces language in response to prompts by drawing on large samplings of human-produced text.  But ai will never decide whether that language is beneficial, useful, helpful, or MORAL.  A human knows when to respond with compassion.  And further, a human bears moral responsibility for what they do and do not say.  We might chastise a person for responding in a robotic way to a moment of deep need, or even deep feeling.  So why would we want to turn to a robot for a response to important things?

I know this is not likely addressing anyone reading this directly; I suspect very few readers of this blog are actually out there utilizing ChatGPT as a therapist.  But I do believe there are people reading this whose children are going to be raised in a confusing world.  If we do not make it a point to teach our children the very important distinction between algorithms and the wisdom of living, breathing, morally responsible humans, they will certainly be a generation raised to overvalue large language models and even take relational cues from them.

Some of you might even still be wondering why this matters.  Who cares if a generation finds comfort, advice, or companionship in an algorithm?  Just ask yourself who you want your kids turning to for dating advice.  Where do you want them to go when they are angry with you?  Or, darker but more serious still, consider how ChatGPT might answer the question, “should I just kill myself?”

I think my generation lacks creative imagination when it comes to the potential darkness that ai might bring.  We survived the transition from cassette tapes to CDs to streaming.  We survived the invention of the internet (thanks, Al Gore).  We have even done okay transitioning to the smartphone.

But blurring the line between human relationships and trusting a bot for advice is scary territory.  Every parent of young children must be concerned not only with limiting screen time, and not only with WHEN to allow a child to get a smartphone, but now also with leaning in heavily on the distinction between compassionate human advice and the artificial kind.

The artificial is always available.  The artificial will be increasingly difficult to regulate.  And the artificial has no genuine moral compass.

I was shocked as I wrestled with the implications of this future for our young people.  The challenges for young parents keep multiplying.  But God is faithful to each generation.  And that faithfulness often takes the form of loving and concerned parents who watch out for the dangers their children will need to navigate.

Recast Church