How would you like to have an AI agent as your personal counselor?

One of the fascinating recent developments in ‘artificial intelligence’ is the rise of transformers. The first one to catch wide attention was BERT, released by Google AI in 2018, and several others soon followed, each new transformer attempting to be bigger and better than the previous ones. One of the most famous recent transformers is GPT-3, released by OpenAI in May 2020, which caused some jaws to drop.

What are transformers, and why is there such hype around them? They are very large neural networks that have been trained on huge amounts of text data. Of course, there are many technical details, but if you are not a computer scientist, here is an attempt at providing some intuition. Suppose aliens are tracking us from space. They can collect huge amounts of textual data and they have algorithmic knowledge. Naturally, they cannot speak English (nor any other language used on Earth), but they can let their algorithm process a lot of text. They use simple tricks to “learn” (or is it learn?) English – when they feed the text into their algorithm, they occasionally omit words, or stop in the middle, and the algorithm has to guess the missing word.

It turns out that, although they have no idea what the words mean, the algorithm can do a very good job of guessing the missing words – this is based on the underlying statistics of human languages; given a sequence of words, the probability of the next word is far from uniform (or random) – some words are highly likely to appear, and most others would never appear in this context. Is this enough for the aliens to understand English? To communicate with us in English? Is this all there is to understanding language?
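For readers who want to see this word-guessing game in action, here is a minimal sketch using the open-source Hugging Face transformers library and the publicly available GPT-2 model (an illustration only – not the tooling used in the Socrates project, and the prompt is an arbitrary example). It prints the model’s probability estimates for the next word given a short context:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available GPT-2 (an illustrative choice, not the project's model).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Given a context, the model assigns a probability to every possible next token.
context = "Making lifestyle changes is not"
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)

# The distribution is far from uniform: a handful of words get most of the probability mass.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")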

The jury is still out, but some transformers have been able to reach fantastic achievements. The most famous transformer at the time of writing is GPT-3 from OpenAI. It has been trained on a huge amount of text, including all of Wikipedia and a large collection of books. It has 175 billion (175,000,000,000) parameters. Apparently Google has built an even larger transformer with 1.6 trillion (1,600,000,000,000) parameters, but little has been released about it so far. Arguably, you can think of parameters roughly as synapses in the human brain; for comparison, the brain has on the order of 10^15 synapses, i.e., the brain is still approximately a thousand times larger in this comparison.

GPT-3 has been able to write short stories and news stories, and has even learned to program web pages (which was unplanned and unexpected). It can also be used as a dialogue agent (“chatbot”), so you can talk to it. Now let’s assume it generally makes sense when we communicate with it – does it ‘understand’ human language? If the aliens were to communicate with us in a similar way, would we agree that the aliens mentioned above understand human language? This is an old philosophical debate, but for the first time it may soon become practically relevant (and not because I assume the aliens are coming).

The question we ask in the context of the Socrates project is: given the impressive success of such transformer models – can we use them as personal coaches, or personal counselors, to help us with the daunting task of behavioral change? Can we expect them to respond on point and motivate us to adhere to our lifestyle changes?

In order to start finding out, we took the first sentence from a previous blog post, written by a human expert, and fed it as a trigger (prompt) to GPT-2 (the smaller sibling of GPT-3). Here are some results; a short code sketch showing how such continuations can be sampled follows the examples:

Making lifestyle changes isn’t easy. The truth is, many of us find that we can’t afford to make lifestyle changes and that our lives are just as miserable as before. In fact, according to the Centers for Disease Control, obesity and diabetes are the two leading causes of death in the U.S. We need to get away from the notion that we need to be thin to be happy. For many people, the most difficult thing about the weight we’re trying to lose is that it feels like it

Making lifestyle changes isn’t easy, but it can be done. The good news is that there are many lifestyle changes that can help you improve your health and your weight. Lifestyle changes that can help you lose weight and keep it off 1. Eat less than you think you should If you eat too many calories, you are going to gain weight. If you eat too little, you are going to lose weight. If you eat too much, you will probably gain weight.

Making lifestyle changes isn’t easy. Your partner might not be supportive of your lifestyle changes. Your partner may be worried about the consequences of your lifestyle changes. How to take care of your diet Your partner may be supportive of your lifestyle changes. Your partner might be worried about the consequences of your lifestyle changes. Ask your partner to help you make lifestyle changes. You might try to find a diet plan that is easy to follow. This might mean that you may have to make some changes to your
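For the technically inclined, continuations like the ones above can be sampled from a pretrained GPT-2 in a few lines of code. The following is a simplified sketch using the Hugging Face transformers library; the sampling settings are illustrative assumptions rather than our exact setup:

from transformers import pipeline

# Load a publicly available GPT-2 text-generation pipeline (illustrative, not our exact setup).
generator = pipeline("text-generation", model="gpt2")

prompt = "Making lifestyle changes isn't easy."
outputs = generator(
    prompt,
    max_new_tokens=100,      # length of each continuation
    do_sample=True,          # sample words instead of always picking the most likely one
    top_p=0.9,               # nucleus sampling keeps the continuations varied
    num_return_sequences=3,  # produce three different continuations, as above
)
for i, out in enumerate(outputs, 1):
    print(f"--- continuation {i} ---")
    print(out["generated_text"])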

On closer inspection you may find some unintelligent behaviors in the texts above, but they seem like a great start. Next, in order to turn the transformer into a personal counselor, it should be able to engage in a dialogue with the user – rather than just spouting text, it should answer in short responses that stay in context. Thus, we tested a GPT-2 model trained specifically to engage in dialogue; here are some examples:

The last three examples, especially, demonstrate one of the main problems. While the dialogue is not too bad as human conversation goes, it is not exactly what we would expect from a professional, helpful personal counselor! Others have reported similar problems, such as a medical GPT-3-based chatbot suggesting that a (fortunately fake) patient take their own life!
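As a concrete illustration of such a dialogue agent, here is a minimal sketch that runs a dialogue-tuned GPT-2 – Microsoft’s publicly released DialoGPT – as a simple chatbot via the Hugging Face transformers library. The model choice, decoding settings, and user turns are assumptions for illustration, not necessarily the model we used:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# DialoGPT is a GPT-2 fine-tuned on conversations; the choice here is illustrative.
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

chat_history_ids = None
for user_line in ["I want to start exercising more.", "But I always give up after a week."]:
    # Append the new user turn (ending with the end-of-sequence token) to the chat history.
    new_ids = tokenizer.encode(user_line + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_ids], dim=-1)

    # Generate the bot's reply, conditioned on the whole conversation so far.
    chat_history_ids = model.generate(bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[0, bot_input_ids.shape[-1]:], skip_special_tokens=True)
    print(f"user: {user_line}")
    print(f"bot:  {reply}")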

And here is what happened when we tried to chat with our agent about Covid-19. Since the pandemic emerged after the original model was trained, we fine-tuned the model on a collection of Covid-related tweets (from the early months of the pandemic). Perhaps not too surprisingly, our ‘friend’ turned out to be a fan of ‘fake news’:
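For completeness, fine-tuning a pretrained GPT-2 on a text file of tweets can be done roughly along the following lines, again using the Hugging Face transformers library. The file name, block size, and hyperparameters are illustrative assumptions, not the configuration we actually used:

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One tweet per line in a plain-text file (the file name is a hypothetical placeholder).
train_data = TextDataset(tokenizer=tokenizer, file_path="covid_tweets.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-covid",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    data_collator=collator,
    train_dataset=train_data,
)
trainer.train()
trainer.save_model("gpt2-covid")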

So how did we find ourselves in this position? In the past (e.g., the 1980s), "AI" used to be knowledge-based. One of the approaches was that of so-called expert systems, in which human expert knowledge was 'manually' formalized into formal knowledge representations. This approach is still useful for small domains, but one of the main reasons it did not succeed is scalability: it is simply too resource-intensive to formalize human expert knowledge to the level of detail accessible to machines.

As a result, the ‘AI’ community moved to machine learning approaches, or data-driven AI. With the growth of the Web and the increase in computer processing power, we are now able to train ‘beasts’ such as GPT-3, processing huge amounts of text and crunching it with billions of parameters. This ‘deep learning’ revolution has indeed led to breakthroughs in machine vision, speech understanding, and language processing. However, here I have demonstrated one of the main drawbacks of this approach, referred to by engineers as ‘garbage in, garbage out’. Rather than learning from experts, the ‘AI’ now learns from the masses – and very often, this exposes the dubious reliability of the masses.

About the author

Prof. Doron Friedman is an Associate Professor in the Sammy Ofer School of Communications and the head of the Advanced Reality Lab (http://arl.idc.ac.il). His PhD thesis, carried out in Computer Science at Tel Aviv University, studied automated cinematography in virtual environments. He continued as a Post-Doctoral Research Fellow in the Virtual Environments and Computer Graphics Lab at University College London, working on controlling highly immersive virtual reality by thought, using brain-computer interfaces. In addition to his academic research, Doron has also worked as an entrepreneur in the areas of intelligent systems and the Internet; he and his lab members often work closely with start-up companies, and his inventions are the basis of several patents and commercial products.
Since 2008, Prof. Friedman and his lab members have been involved in several national and international projects in the areas of telepresence, virtual reality, and advanced human-computer interfaces. Doron’s research is highly multidisciplinary and has been published in top scientific journals and peer-reviewed conferences.