Published: 31/05/2025
Gone are the days when a company or organization had to decide whether it would use artificial intelligence. It is now just a matter of how. With AI adoption increasing so rapidly, it was inevitable that some ethically questionable use cases would pop up.
Students using chatbots like ChatGPT to write papers is an obvious example, but the reverse is equally worrying. As reported in The New York Times, a business professor at a Boston-area university was allegedly using ChatGPT to grade papers and mistakenly left the prompt in when returning comments to students. Given the soaring cost of higher education, one student was understandably upset and requested a tuition refund for the course.
While this situation is clearly ethically compromised (don't tell your students or employees not to use chatbots and then turn around and do it yourself), the majority of AI practices likely fall into a gray area. It would be handy to have black-and-white ethical guidelines. In theory, that is not too much to ask. In practice, it would take an entire career of research, writing, and teaching to fully flesh out all the ethical implications of generative models.
But there is a distinction that allows us to establish some general best practices when dealing with AI. The discussion of how to ethically approach artificial intelligence or machine learning began long before the actual technology emerged. The genesis can likely be traced to the landmark 1950 paper “Computing Machinery and Intelligence” by Alan Turing. The paper introduced the concept of the Turing test, a method for determining whether a machine can exhibit what humans understand as intelligence.
In simplest terms, the Turing test puts a machine behind a curtain. A human on the other side of that curtain asks the hidden machine a series of questions. If the person cannot discern whether it is a machine or another person giving the responses, the machine passes the test. For decades, almost no machine could pass. What we were left with was a technology that is not intelligent by human standards, and is therefore an object. That determination then shapes the ethical conversation around the machine: you do not need to treat it as something with agency. Rather, it should be viewed as any other tool, a means to an end.
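To make the test procedure concrete, here is a minimal sketch in Python of the imitation-game setup described above. Everything in it is a hypothetical stand-in: `machine_respond` and `human_respond` substitute for a chatbot and a person, and `naive_judge` is a deliberately crude interrogator, not how real evaluations work.

```python
import random

# Hypothetical stand-ins for the two possible hidden respondents;
# in a real test these would be a chatbot and an actual person.
def machine_respond(question: str) -> str:
    return f"Regarding '{question}': it depends on the context."

def human_respond(question: str) -> str:
    return f"Hmm, '{question}' is a tough one to answer."

def naive_judge(transcript: list[tuple[str, str]]) -> bool:
    """A deliberately crude judge: guesses 'machine' if every answer
    follows the same rigid template. Real judges rely on subtler cues."""
    return all(answer.startswith("Regarding") for _, answer in transcript)

def imitation_game(questions: list[str]) -> bool:
    """One round of a toy Turing test. Returns True only if the machine
    was behind the curtain and the judge failed to identify it."""
    is_machine = random.choice([True, False])  # hide one respondent
    respond = machine_respond if is_machine else human_respond
    transcript = [(q, respond(q)) for q in questions]
    judged_machine = naive_judge(transcript)
    return is_machine and not judged_machine

if __name__ == "__main__":
    questions = ["What does coffee taste like?", "Describe a childhood memory."]
    print("Machine passed this round:", imitation_game(questions))
```

Note that with this toy machine the judge always spots the template, so the machine never passes, which mirrors why early, rigidly scripted programs failed the test. Modern large language models produce far less predictable text, which is exactly what makes the judge's job hard today.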
Examples of this kind of object technology could include computers, telephones, or automobiles. The ethical questions that come up for these machines are not about the things in themselves but about them as objects for our use: issues like equality of access, potential programming bias, or the privacy of the information they store. Although ChatGPT and other large language models may exhibit patterns in their responses that can help identify them as a machine, such as tone or consistency, those patterns are far from easy to spot. A Stanford University study from last year found that ChatGPT did pass the Turing test, and the technology has only gotten better since.
What this means is that ChatGPT and similar AI have human-like intelligence in the sense that they are not discernibly different to the naked eye. In other words, we may have crossed into the machines-as-subjects era. By extension, they should be treated as ends in themselves. According to the 2007 AI Magazine article "Machine Ethics: Creating an Ethical Intelligent Agent," treating AI as a subject means that ethical questions about it should be "concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable."
The ethical landscape here is about the things in themselves: how they behave, and how you act toward and relate to them, accounting for societal values, context, and logic. In other words, the ethics of human relationships. AI is bringing change to all areas of life. But is it a subject or an object? In subtle but significant ways, a case can be made for both. We can be sure it is not neutral. Only by solving this riddle can we deal with the difficult ethical questions that come with the technology.
Q: What is the Turing test?
A: The Turing test is a method for determining whether a machine can exhibit what humans understand as intelligence. It involves a human judge posing a series of questions to a hidden machine and a hidden human. If the judge cannot tell which responses came from the machine, the machine passes the test.
Q: Why is the ethical use of AI important?
A: The ethical use of AI is crucial because it can have significant impacts on individuals and society. Ethical considerations ensure that AI is used responsibly, transparently, and in a way that respects human values and rights.
Q: What are some examples of AI being used unethically?
A: Examples of unethical AI use include students using chatbots to write papers, professors using AI to grade assignments without students' knowledge, and companies using AI to make decisions that disproportionately affect marginalized groups.
Q: How can we ensure AI is treated ethically?
A: To ensure AI is treated ethically, it is important to establish clear guidelines and regulations. This includes transparency in how AI is used, accountability for AI decisions, and ensuring that AI systems are fair, unbiased, and respect privacy.
Q: What is the difference between treating AI as a subject and an object?
A: Treating AI as a subject means recognizing it as an entity with its own agency and ethical considerations, similar to how we treat humans. Treating AI as an object means viewing it as a tool or means to an end, with ethical considerations focused on its use and impact.