Is ChatGPT a blessing in disguise or a disaster in the making?
2023-02-03 12:07:37

"ChatGPT was created by OpenAI, a global non-profit organization focused on researching and developing artificial intelligence. It is a language model based on the GPT-3 (Generative Pre-trained Transformer 3) machine learning algorithm. ChatGPT was trained on large text collections from around the world to answer various questions from different fields. Its ability to analyze and process natural language allows it to provide answers in a manner as human-like as possible.
OpenAI continues its research and development of the ChatGPT model to provide increasingly advanced and efficient functionality. To achieve this, the OpenAI team focuses on several main areas:
- Improving answer quality: OpenAI is working to improve the accuracy and contextual relevance of answers so that the model can better respond to questions and follow conversations.
- Expanding the scope of applications: OpenAI is working to expand the capabilities of the ChatGPT model to be used for a wide range of applications such as text generation, translation, and speech recognition.
- Improving performance: OpenAI aims to increase the performance of the ChatGPT model so that it can respond to questions faster and more effectively.
- Integrating with other technologies: OpenAI is working on integrating the ChatGPT model with other technologies, such as artificial intelligence and machine learning, to provide even more advanced functionality.
The goal of OpenAI is to continually improve and develop the ChatGPT model to better meet the needs of its users and provide even more advanced services."
I’m ChatGPT
The AI system that has been hitting the headlines for several weeks now, ChatGPT, as it calls itself, has become as intriguing as it is concerning. A major reason for this is the uncanny manner in which it communicates: human-like writing that is clear, precise, and professional.
Paradoxically, it is a victim of its own success: due to its popularity and the resulting load, it is not always accessible during the day. Despite its global success, it generates enormous costs (each query costs $0.11 in infrastructure, maintenance, programmer salaries, and so on).
Let’s consider whether we should give in to optimism or come down on the side of skepticism. Maybe we should instead be afraid? Are we witnesses to a revolution, evolution, or perhaps an entirely different process?
Will artificial intelligence take away programmer jobs?
“Language models like ChatGPT are tools, not replacements for programmers. They can support and ease the work of programmers by automating repetitive and simple tasks like generating documentation or answering frequently asked questions. At the same time, language models are not capable of replacing the creativity and skills of programmers, such as designing and implementing complex solutions.
In reality, language models like ChatGPT can be an additional tool for programmers and help them perform their tasks faster and more efficiently. This may lead to the development of new, more advanced technologies and applications.”
Is it really human-like?
When asking such a tool a question, we will get a precise and literal answer. It will probably not be tinged with the linguistic ingenuity of human writers, much less artists. We should not expect creativity, but it will find the answer to our burning questions somewhere in the depths of the internet. Undoubtedly, in the future it is the first steps of a career that will be the most challenging.
Currently, anyone can learn the basics (during training or independently), find a job where they will use those basics, and then receive increasingly complex tasks while learning various methods of solving them and expanding their knowledge. After years of practice, they will eventually become an expert. At this point, the biggest problem is that phrase: "after years". Why hire someone just to train them when AI will do the same work for us much cheaper, or even for free?
One can then focus on hiring only people with extensive experience (seniors). The issue with this approach is that if everyone thought this way, we would eventually run out of skilled professionals. Those working now would eventually leave the industry, and the next generation would never arrive, because no one wanted to train them in the name of efficiency. Thus the vicious circle closes. Of course, as we said earlier, the basics can be learned at university, but "the strongest steel is forged by the fires of hell". Even the best school will not teach you what you can learn from practice.
Self-development
Let’s go a few steps further and think about the automatic updating of such software. The creators will probably conclude that the tool can solve their problems independently, so it will: search the internet for the best solution, implement it, check the results against quantified indicators, and continue to improve.
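The loop described in that paragraph (search for candidate solutions, implement one, check a quantified indicator, keep improving) can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the function names and the toy "system" (a plain number) are my own assumptions, not any real AI's API.

```python
# Hypothetical sketch of a self-improvement loop: generate candidate
# changes, apply each one, measure it, and keep only improvements.
# All names are illustrative; this is not a real system.

def self_improve(system, find_candidates, score, rounds=10):
    """Greedy loop: try candidate changes, keep any that raise the score."""
    best = score(system)
    for _ in range(rounds):
        for change in find_candidates(system):
            trial = change(system)        # "implement" the found solution
            trial_score = score(trial)    # check the quantified indicator
            if trial_score > best:        # keep only strict improvements
                system, best = trial, trial_score
    return system, best

# Toy stand-in for a real system: a number we try to push toward 100.
target = 100
candidates = lambda x: [lambda v: v + 1, lambda v: v * 2, lambda v: v - 1]
score = lambda x: -abs(target - x)

result, final_score = self_improve(3, candidates, score, rounds=30)
print(result, final_score)  # 100 0
```

The toy example converges because the indicator is a single number; the worry raised below is precisely that for a real system we cannot write down, let alone verify, such an indicator.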
We do not know where this will lead, as we cannot even begin to simulate such a process. If we could, the process itself would be unnecessary. If an artificial intelligence algorithm could improve itself, we would be talking about the emergence of the so-called technological singularity. At that point, the development of the tool would be uncontrollable, and its outcome unknown.
Before the development mentioned above takes place, it is worth noting that such a tool can be used to find answers that we do not expect.
For example: if we want to design a vehicle with an aerodynamically optimal shape, we have tools for this, and we know what we will get: a streamlined vehicle. However, if we seek answers to specific problems for which we only provide initial and boundary conditions and then ask for calculations, we cannot predict the results. Such a task occurs, for example, in the "game of life", a well-known example of a cellular automaton.
It is also worth noting that the amount and complexity of calculations in such an automaton are so unfathomably large that even if we had the results, we would not be able to analyze them from a human point of view. We simply wouldn't have enough time. As of now, this leaves room for research, development, and discoveries we have yet to become aware of.
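To make the contrast concrete, the rules of the "game of life" mentioned above fit in a few lines, even though the behavior they produce is famously unpredictable. A minimal Python sketch of Conway's rules (the coordinate convention and the glider example are my own choices):

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cell coordinates."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose pattern travels diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Two rules, a handful of lines, and yet the general question "what will this pattern eventually do?" is undecidable; that is exactly the kind of system whose full output we could never analyze by hand.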
Search engines
Let’s go further and consider the things we use on a daily basis to find information — search engines. The result page is usually a set of links. We browse them, ascribe value to them in some way, and if we find something that seems attractive at first glance, we delve into the subject and evaluate its usefulness.
AI can reduce all of the above steps to just one: asking a question. In response, we receive an answer, with the artificial intelligence itself assessing the sources' credibility, accuracy, and quality. Many believe this is why Microsoft recently invested billions in OpenAI: to refine its Bing search engine and, eventually, replace it with artificial intelligence.
On the face of it, this is a great solution, but again we are missing something. In the above example, we get a result that we cannot verify. What if it is incorrect? What if it is falsified or, just as likely, deliberately planted to mislead users? We may lose the ability to use an "ordinary" search engine in such a deep, analytical way, because it may simply no longer be available to us.
Should we be afraid?
Before all the things mentioned above come to pass, if they ever do, a good deal of time will go by. Will it be years or decades? Only time will tell. Today, we can say one thing for sure: AI is accelerating at a blistering pace, and it is only a matter of time before it changes our behaviors and skills. Will it revolutionize our lives?
Most likely, but at first it will evolve by taking over repetitive and tedious tasks (so-called "white-collar" jobs). We can probably expect workers who process information (e.g., in an office), schedule appointments (e.g., with a doctor), or compile information (e.g., write blog posts) to be replaced first. On the other hand, craftsmen and artisans (e.g., carpenters, plumbers, blacksmiths) don't need to worry.
P.S. The text uses answers given by ChatGPT. At the beginning of the text, it is marked and later incorporated into the article. Can you distinguish human creation from machine-generated content?