Moral Boundaries of AI

Where do we draw a line in the sand when it comes to creating morally ambiguous AI solutions?

Patrick Heller
4 min readMar 22, 2023


One of the conundrums we will face more and more is how to program Artificial Intelligence. With the advent of next-gen AI chatbots like ChatGPT, a famous quote from the first Jurassic Park movie comes to mind: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

Bot programming is already a hot topic in many large organizations. Maintaining a human-staffed service desk for customers is expensive, so companies are eager to automate it as soon as possible.

These bots — short for robots — “know” what to answer when a customer enters a question. Only a couple of years ago, these bots were little more than an FAQ (Frequently Asked Questions) page presented as a simple conversation. Nowadays, many of these bots pose as real humans, and they sometimes get away with it, too.

Many online service desk chats start with a bot and hand over to a real human once the conversation becomes too difficult for the bot to handle. In such typed online conversations, it is often hard to tell where the bot ends and the human takes over. In the near future, these bots will only become more sophisticated…