What does ChatGPT mean for the future of writing?

How will AI chat tools change the future of writing?

AI chat tools have the potential to change the way we write by making it more efficient and accurate. For example, they can help check spelling and grammar, suggest synonyms, and even generate entire pieces of text. This technology could also make writing more accessible to people who struggle with language, such as those learning a new language or those with certain disabilities.

However, it is important to note that the use of AI in writing also raises ethical concerns about the authenticity and originality of the work being produced.

Full disclosure: the above two paragraphs were written by the AI chat tool ChatGPT. The tool was developed by the software company OpenAI, the people who gave the world DALL-E, an AI tool that went viral last year for its uncanny ability to turn text prompts into images.

ChatGPT was released for public access on November 30, 2022. On opening ChatGPT, the webpage indicates that the tool can interact with users in a “conversational way”. Among ChatGPT’s talents: it “can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests”.


Read more: The Art of AI: Evidence that AI is Creative or Not?


So how does it work?

Trained on huge amounts of text data, the tool generates responses to user prompts by predicting, word by word, what is most likely to come next.

It may sound easy, but without proper “training” it is all too easy for an AI to simply spout nonsense. ChatGPT has made headlines and caused a stir online precisely because it actually sounds pretty human. Perhaps too human?

For comparison, there are other AI tools that allow you to manipulate some of their parameters – this gives insight into what’s going on behind the scenes.

Take InferKit, for example, created by Canadian developer Adam King. InferKit allows you to “dial up” the sampling temperature, which makes the generated text more random. The results are often funny.
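To see what that dial actually does, here is a minimal sketch of temperature sampling in Python. The tiny vocabulary and the scores are invented purely for illustration; they are not taken from InferKit or any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    # Divide the raw scores by the temperature, then softmax into probabilities.
    # Low temperature sharpens the distribution (safer, more repetitive text);
    # high temperature flattens it (more surprising, often sillier text).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Invented scores for words that might follow "The cat sat on the ...".
vocab = ["mat", "sofa", "roof", "keyboard"]
logits = [3.0, 2.0, 0.5, 0.1]

for temp in (0.2, 1.0, 2.0):
    samples = [vocab[sample_with_temperature(logits, temp)] for _ in range(8)]
    print(f"temperature={temp}: {' '.join(samples)}")
```

At a low temperature the model almost always picks “mat”; crank the dial up and “keyboard” starts sneaking in, which is roughly why high-temperature output reads as comedy.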

ChatGPT is based on what is known as the Generative Pre-trained Transformer (GPT) architecture. This essentially means that the software uses deep-learning algorithms to analyze and generate text. The model is trained on vast amounts of data from the internet to “understand” the nuances of natural language as produced by humans.

It analyzes input text by breaking it down into smaller components, called tokens, such as words or pieces of words. Piecing these together, one predicted token at a time, ChatGPT builds its response.
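In very rough outline, that generation loop looks something like the sketch below. The whitespace tokenizer and the predict_next_token lookup table are hypothetical stand-ins of my own; the real system uses subword tokens and a large neural network that scores every token in its vocabulary.

```python
import random

def tokenize(text):
    # Real systems use subword tokenizers (e.g. byte-pair encoding);
    # splitting on spaces is a deliberate simplification.
    return text.split()

def predict_next_token(tokens):
    # Invented stand-in for the trained network, which would consider
    # everything generated so far, not just the last token.
    table = {
        "sat": ["on"],
        "on": ["the"],
        "the": ["mat", "sofa"],
    }
    return random.choice(table.get(tokens[-1], ["<end>"]))

tokens = tokenize("The cat sat")
while len(tokens) < 10:
    next_token = predict_next_token(tokens)
    if next_token == "<end>":
        break
    tokens.append(next_token)

print(" ".join(tokens))  # e.g. "The cat sat on the mat"
```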

What sets ChatGPT apart is that it is trained via reinforcement learning from human feedback (RLHF).

Human AI trainers fine-tuned the ChatGPT prototype by playing both sides of a conversation and ranking the AI’s candidate responses from best to worst. A “reward” model trained on those rankings then teaches the AI to recognize when it is producing the kind of response a human would rate highly.
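Conceptually, the reward model is just a scorer used to steer the AI toward better answers. The sketch below is a toy illustration of that single idea, with an invented heuristic in place of the real neural reward model; it is nothing like OpenAI’s actual training pipeline, where the model itself is updated toward high-reward outputs.

```python
def toy_reward_model(prompt, response):
    # Invented heuristic standing in for a neural reward model that was
    # trained to mimic human rankings of candidate responses.
    score = 0.0
    if response and response[0].isupper():
        score += 0.5  # reward well-formed sentences
    if response.rstrip().endswith("."):
        score += 0.5
    if len(response.split()) >= 5:
        score += 1.0  # reward substantive answers over fragments
    return score

def pick_best(prompt, candidates):
    # Re-ranking candidates like this is a much cruder cousin of RLHF,
    # but it shows how a scorer can prefer human-sounding responses.
    return max(candidates, key=lambda r: toy_reward_model(prompt, r))

prompt = "Briefly explain photosynthesis."
candidates = [
    "plants do a sun thing",
    "Plants convert sunlight, water and carbon dioxide into sugars and oxygen.",
]
print(pick_best(prompt, candidates))
```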

The developers note some snags in the model, including answers that sound plausible but are incorrect or nonsensical, sensitivity to small tweaks in the phrasing of a prompt, excessive wordiness, guessing at the user’s intent when a prompt is ambiguous, and sometimes responding to inappropriate requests, including harmful instructions, or exhibiting biased behavior.


Read more: Promises and Problems: How do we apply AI in clinical settings and ensure patient safety?


The potential for abuse is among the most serious public concerns about ChatGPT and other AI programs for language processing.

In academic writing, for example, a student might use ChatGPT or an equivalent tool to produce an essay. The student didn’t do the work and has essentially plagiarized. But that would be very difficult to prove, because ChatGPT is specifically designed to sound human.

Edward Tian, a 22-year-old student at Princeton University, has developed an app which claims it can tell “quickly and efficiently” whether ChatGPT is the author of an essay.

OpenAI has partnered with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory to compile a report on the potential misuse of AI language programs, as well as possible mitigation methods.

Far more concerning than plagiarized university assignments is the potential use of AI language tools to spread disinformation.

“We believe that it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale,” reads a statement posted on the OpenAI website. “We hope our research will inform policymakers that are new to the AI or disinformation fields, and spur in-depth research into potential mitigation strategies for AI developers, policymakers, and disinformation researchers.”

For now, it appears that artificial intelligence is here to stay – whether we like it or not.

It certainly has its uses. As my learned friend ChatGPT noted at the beginning of this article, it can potentially make writing more efficient as well as more accessible. But legislation will need to catch up with the rapidly evolving technology if misuse is to be avoided.

In the meantime, you can rest assured that I wrote this article on my own. Or did I?


