Moral Boundaries of AI

Where do we draw a line in the sand when it comes to creating morally ambiguous AI solutions?

Patrick Heller
4 min read · Mar 22


One of the conundrums we will face more and more is the programming of Artificial Intelligence. With the advent of next-gen AI chatbots like ChatGPT, a famous quote from the first Jurassic Park movie comes to mind: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

Bot programming is already a hot topic in many large organizations. Keeping a human service desk running for customers is expensive, so companies like to automate it as soon as possible.

These bots (short for robots) “know” what to answer when a customer enters a question. Only a couple of years ago, these bots were little more than the FAQ (Frequently Asked Questions) page repackaged as a simple conversation. Nowadays, many of these bots pose as real humans, and they sometimes get away with it, too.

Many online service desk chats start with a bot and hand over to a real human once the conversation becomes too difficult for the bot to handle. In such online (typed) conversations, it is often hard to tell where the bot ended and the human began. In the near future, these bots will become ever more sophisticated and more humanlike in their conversations, like the now-popular ChatGPT.
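To make that handoff concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from a real service-desk product; the FAQ matcher, the confidence threshold, and the escalate_to_human placeholder are assumptions invented for this example.

```python
# Illustrative sketch of a service-desk bot that answers from an FAQ and
# hands over to a human agent when it is not confident enough.
# All names here are hypothetical, not taken from any real platform.

CONFIDENCE_THRESHOLD = 0.6  # below this score, the bot escalates to a human

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my invoice": "Invoices are available under Account > Billing.",
}

def match_faq(question: str) -> tuple[str | None, float]:
    """Very naive matcher: score each known question by shared words."""
    words = set(question.lower().split())
    best_answer, best_score = None, 0.0
    for known, answer in FAQ.items():
        known_words = set(known.split())
        score = len(words & known_words) / len(known_words)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer, best_score

def escalate_to_human(question: str) -> str:
    # Placeholder: in practice this would route the chat to an agent queue,
    # ideally without the customer noticing the seam.
    return "Let me check that for you..."

def handle_message(question: str) -> str:
    answer, confidence = match_faq(question)
    if answer is None or confidence < CONFIDENCE_THRESHOLD:
        # The conversation has become too difficult for the bot.
        return escalate_to_human(question)
    return answer

if __name__ == "__main__":
    print(handle_message("How do I reset my password?"))          # bot answers
    print(handle_message("My order arrived damaged, now what?"))  # human takes over
```

Real deployments replace the word-overlap matcher with a language model, but the design question stays the same: where to set the threshold at which the machine stops pretending and a person steps in.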

If you’re programming these bots and their conversational algorithms, you have to consider morality at some point. If you’re programming health-care-related bots, the conversations can become very private and delicate. If you’re programming bots for a dating app, you might be taking advantage of the heartfelt feelings of others toward someone who doesn’t really exist.

Where do you draw the line? Of course, that is first and foremost a question for the owner of the bot, the person with the ultimate say over how the bot will act. But what about your own moral sense? Will you simply program anything they ask of you, or do you draw a line in the sand? And if you are coaching a team that develops such bots and you feel a line is being crossed, do you draw attention to that, or do you leave it to the team and the business owner?

Soon, and for some of us perhaps already, we will need conversations about morality at work. Some of that conversation will no doubt wander into the realm of philosophy rather than psychology, but it pays to understand the psychological origins of morality.

Most psychologists consider morality a product of human evolution. When we were trying to survive on the plains of Africa — amongst many deadly predators — it paid off for humans to work together. If you start living together in a social structure, such as a tribe, it helps if you can trust the rest of the group.

We’ve seen in an earlier article — about reference groups — how important it is to feel accepted and respected by the group. If you share the same ideas about what is right and what is wrong, that puts your mind at ease and allows the group members to let their guard down and spend time and energy on making progress for the entire tribe.

People in those days with lower standards for what is right and what is wrong would have had more trouble assimilating into a tribe, and therefore far fewer opportunities to reproduce.

From an evolutionary perspective, that would result in fewer antisocial genes and more “moral” genes being spread. Over time, the human population was thus gradually equipped with a strong sense of morality. The very same principle applies to many other species that prefer living in groups.

There are two interesting moral implications of current AI developments. One is the question posed above: what do you do when you are paid to create an amoral AI? What if you are tasked with programming a healthcare bot that is supposed to sell expensive medicine, possibly disregarding the best solution for the patient? Or what if you are asked to create a human-mimicking dating bot whose purpose is to get people to buy expensive gifts? The moral fabric of humans will be tested here.

On the other hand, there is the question of the moral standards of the AI itself. An AI chatbot does not feel the evolutionary need to fit into a group. It could act amorally without consequences, or at least without feeling them. Then again, the evolution of AI might go so rapidly that an AI will autonomously improve itself, cross what is called the technological singularity, and, who knows, develop a moral system of its own. The moral implications for us humans, in that case, are very much unknown.

If you are interested in stories like these and more, you can buy Essential Psychology for Modern Organizations from Amazon and other bookstores: https://www.amazon.com/Essential-Psychology-Modern-Organizations-scientifically/dp/B08NP12D77/

