Is it okay to use ChatGPT content as my own words?
The LinkedIn post that prompted me to write this article.
Some time ago, I came across a post criticizing the practice of copying ChatGPT content and pasting it as one’s own statements. At first glance, the issue may seem obvious, but it is not that straightforward. In this article, I present a different perspective: copying ChatGPT content and using it, even without any modification, can be not only permissible but advisable.
The role of AI in communication
It is worth asking, at the outset, what role artificial intelligence should play in shaping communication, now that it increasingly permeates our daily lives. What place should generative AI, like ChatGPT, have in the process of formulating our statements? Should it be seen as a threat to authenticity and individuality, or rather as a tool that supports and expands our communicative abilities?
This article attempts to answer these questions, while also addressing the criticism of using generative artificial intelligence to support one’s own statements.
AI supports argument formulation
If a conversation focuses on the exchange of arguments and aims to seek truth or meaning, it does not matter whether the words used to formulate those arguments come from a human or are generated by artificial intelligence. In such a conversation, the value of a statement should not be – and usually is not – measured by the emotions or personal characteristics of the speaker. What counts is the substantive value of the presented arguments.
ChatGPT, built on the GPT-4 model, can be an invaluable tool in conducting substantive discussions. Although it is not perfect, in many respects it uses language far better than the average person. Thanks to its rapid access to a vast base of knowledge – both about the world and about the rules of language use – ChatGPT can instantly generate formulations that present a given argument effectively: clearly, objectively, and in an easily comprehensible way.
To be direct: when I am in a conversation and lack the right words to express my thoughts – when I am searching for arguments, examples, or comparisons – using ChatGPT is a very good idea. Contrary to what some people suggest, using ChatGPT content is not a sign of disrespect or dishonesty. Quite the opposite. Out of respect for others and for myself, I want my statements to be clear, factual, rational, and as objective as possible. ChatGPT is excellent at formulating such statements, especially when given a proper request, for example:
On LinkedIn, I came across the following post: “content of the post.”
I disagree with the thesis posed by the author of the post. I believe that XYZ. Please prepare a response to this post on my behalf, presenting arguments that justify my view. Additionally, support these arguments with real examples.
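As a technical aside: the same request pattern can also be sent programmatically rather than through the chat window. Below is a minimal sketch, assuming the official openai Python SDK (v1+) and an API key in the environment; the gpt-4 model name and the build_reply helper are illustrative assumptions, not a recipe from this article.

```python
# Minimal sketch: asking a GPT-4 model to draft a reply defending a view
# I already hold. Assumes the official `openai` Python SDK (v1+) and an
# OPENAI_API_KEY set in the environment; build_reply is a hypothetical helper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_reply(post_content: str, my_view: str) -> str:
    """Ask the model to draft a response to a post I disagree with."""
    prompt = (
        f'On LinkedIn, I came across the following post: "{post_content}"\n\n'
        f"I disagree with the thesis posed by the author of the post. "
        f"I believe that {my_view}. Please prepare a response to this post, "
        "presenting arguments that justify my view. Additionally, support "
        "these arguments with real examples."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical usage, with the placeholders from the prompt above:
# print(build_reply("content of the post", "XYZ"))
```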
If the response ChatGPT generates is fully in line with my view and I feel no need to change it, then I do not have to modify the content artificially just so that no one can accuse me of copy-pasting. What matters most is my clear conscience: I was short of words, so I sought help to express myself in the best possible way.
The exchange of arguments is like a judicial process
A conversation in which we exchange arguments can, to some extent, be treated like a court trial. In both cases, whether in a discussion or in court, the parties strive to present their arguments in the clearest and most convincing manner. In court, lawyers regularly cite not only legal provisions but also the opinions of other lawyers, chiefly judges, whose interpretations of the law appear in the justifications of higher court rulings. In a discussion, we can likewise draw on various sources to strengthen our arguments.
A person who is perfectly capable of formulating clear and effective arguments on their own, yet chooses to copy content generated by artificial intelligence and present it as their own, may fairly be considered lazy, or perhaps simply indisposed.
However, a person who genuinely lacks the right words to express their argument effectively, and therefore turns to ChatGPT for help, acts intelligently. Such a person understands their limitations and overcomes them with technology. This approach deserves praise, not criticism.
I consider myself someone with a flair for writing, likely thanks to my parents, for whom I have been typing legal arguments, first on a typewriter and later on a computer, since a young age. Nonetheless, I too occasionally need support in finding the right arguments to justify my views, or in working out the meaning of a statement. In those moments, I turn to ChatGPT for help. Whether I use the words ChatGPT generates verbatim, or modify them first, depends on the situation. In a substantive discussion, where the strength of the argument is what counts, I see no problem with copying and pasting. As I indicated earlier, in such discussions the content matters most, not the authorship.
Another example of my use of ChatGPT is in discussions conducted in English, where I sometimes feel less confident formulating my thoughts. Here, AI assistance is even more valuable: ChatGPT is far more fluent than I am in languages foreign to me, and it handles English best of all.
Think before you use
Regardless of how I use ChatGPT, its first response often does not precisely match what I have in mind, particularly if I ask imprecise questions or formulate my problems and ideas very broadly. In such cases, I often need to ask follow-up questions or rephrase my thoughts to get a response that fully satisfies me. In my daily work, I frequently use only parts of the text ChatGPT generates, or just the ideas that come to mind while reading it.
Given the above, anyone who wants to present ChatGPT content as their own should read it carefully and consider whether they fully understand and accept it. Copying and pasting such content thoughtlessly can achieve the opposite of the intended effect: communicative chaos, and exposure to criticism or even ridicule.
Should I disclose that I used AI-generated content?
I treat ChatGPT as a personal assistant. It is an extension of my intellect and my memory. I analyze its output and consciously decide what to use as-is, what to modify, and what is of no use at all.
The only doubt I might have (but do not) about using my assistant is whether I should disclose that the content was generated by ChatGPT. In the context of a conversation focused on the exchange and analysis of arguments, such information is unnecessary.
As I have mentioned several times, what matters is the validity of the argument, not its source. It makes no difference whether the argument was taken from a friend, a book, Wikipedia, or ChatGPT. Likewise, it does not matter to a law firm’s client that a legal opinion was drafted by an assistant, as long as the head of the firm has signed the opinion and thereby vouched for its reliability.
As an aside, I would take an even more liberal approach to the use of AI-generated content in purely informational communications: product and service descriptions, terms and conditions, contracts, educational and commercial materials, and so on. Here the authorship of the content is even less relevant.
Philosophical reflections – where does my thought begin?
Let’s start with the observation that everything we say and think results from the fact that, from the beginning of our lives, an image of the world and a set of preferences are constantly being formed and transformed in our brains. In a sense, everything we think and express is the product of our unique experiences and genes. Still, I know a thought is mine because it appeared in my head.
But what happens when my thoughts, sentences, and statements are built on content that did not originally arise in my head? Suppose a friend convinced me of his point of view, and I then present that opinion to another person using exactly the same words the friend used, because I could not express it any better. I adopted someone else’s statement and now treat it as my own. The thought is now mine, because that is how I treat it. It does not matter that it did not originate in my mind; I accepted it into my mind.
This process of exchanging and adapting thoughts can be seen as a continual interaction between the individual and collective aspects of our intellect. Every thought we have, though it may seem unique, is partly shaped by experiences we share with others.
Let me give another example. I once came across an argument against censorship carried out under the guise of fighting disinformation. The argument struck me as very apt, so I wrote it down in my notebook. Later, I took part in a discussion on precisely that topic: the fight against disinformation. In the conversation, I presented the anti-censorship argument in the exact words I had written in my notebook. Someone might ask: was my statement really mine? Should I have made at least minor changes to it, to defend myself against the accusation of copying someone else’s words? Does my statement lose its value because I did not invent it?
These considerations lead to a deeper question about the nature of intellectual property and authenticity in the information age. How can we define the ‘ownership’ of thoughts when so many of them are inspired by, adapted from, or even directly borrowed from others? Does a thought stop being mine if it was based on information obtained from another person? Does a statement stop being mine if its similarity to another statement that inspired me exceeds 90%? In the information age, the source of information becomes less significant; what is critical is the credibility of the information and its usefulness in a specific context.
Arguments and feelings
In this discussion, it is very important to distinguish between expressing feelings and exchanging arguments or emotionally neutral information (e.g., information about the content of regulations). Everything I have written about so far pertained to the exchange of arguments or the conveyance of emotionally neutral information. The situation is entirely different when it comes to expressing emotions.
This can be compared to the difference between the hard sciences and art. Exchanging arguments and conveying ‘dry’ information, like the hard sciences, requires logic, coherence, and clarity, and must be relevant to the problem at hand. Expressing feelings, like art, does not have to follow logic or solve a specific problem; feelings simply arise in us and manifest in various, sometimes incomprehensible, ways.
I am not saying that ChatGPT is completely useless for expressing feelings. Quite the contrary: the linguistic skills of language models like GPT are so good that they excel at crafting compliments or defusing negative emotions. Nevertheless, it seems to me that copying AI-generated content to express one’s own feelings makes very limited sense. Unlike in argumentation, here the content matters less and my own emotion is key: the tightness in my stomach, nervousness, surprise, awe, or terror. These are things AI cannot yet feel.
Summary
In conclusion, I believe that in the cases presented above, copying ChatGPT content and using it as one’s own is permissible and does not require informing recipients.
When having a conversation, I do not analyze where a person got the words they are speaking; I analyze whether those words make sense. If ChatGPT can help someone prepare a better statement, I encourage using such assistance. However, I recommend reading the AI-generated content thoroughly and considering whether one understands it well and fully agrees with it. Copying content without such analysis may expose the author to criticism and ridicule.
The claim that using ChatGPT content as one’s own deceives others oversimplifies the issue and suggests either a lack of deeper analysis or a disregard for its results.
The LinkedIn post titled ‘ENOUGH OF THIS SHAM ON LINKEDIN’, which provoked this article, could be compared to attaching the headline ‘ENOUGH OF ACCIDENTS ON THE ROADS’ to an article arguing that people should stop driving cars because accidents happen.
This is not the way to go.
Maciej Michalewski
CEO @ Element. Recruitment Automation Software