
What AI can do and how your company can benefit

Building blocks for an AI strategy in corporate communications. Everything we thought we knew about AI has changed.

Until recently, “artificial intelligence” in corporate communications outside IT expert forums was little more than a buzzword used to put a positive spin on a customer’s product as soon as it included even one algorithm. AI was found primarily in “smart” consumer products that featured what appeared to be particularly intelligent functions.

This slowly began to change when, for example, chatbots were increasingly used on websites to communicate with customers. Results have been mixed. Chatbots are designed to provide additional service and relieve call centers, sales, and support staff. The quality of this service is expected to keep improving rapidly, especially for narrowly defined tasks.

For years now, specialized service providers have also been offering AI applications that benefit marketing and PR, especially in monitoring and evaluating far-reaching measures on the web. This includes measuring search-engine-related online marketing with clear KPIs (Google Analytics) and the success of media presence, where the tonality of media reports is also analyzed automatically.

The age of trial and error

However, the true state of artificial intelligence has progressed rapidly, and visibly for many, since projects such as those from OpenAI have been made available to the public. Internet users can now engage in dialog with artificial intelligence on numerous platforms, free of charge. In German-speaking countries, the best known of these platforms are ChatGPT for text generation and DALL-E 2 for the text-based generation of graphics. OpenAI’s models are based on selected, closed training material that extends only to 2021. While many people have yet to experience these AI systems, the discussions are already making waves elsewhere. In January 2023, it was announced that after investing a billion dollars in OpenAI, Microsoft is now likely to integrate the chat module into Microsoft Office and invest another 10 billion dollars.

The truly astonishing output of intelligent computers strikes at the foundation of the human self-image: What is the difference anymore between human and machine “thinking”? Columnists in newspapers and magazines ask whether supercomputers can have a soul, as suggested by Blake Lemoine, a software expert who was recently fired by Google. Politicians may yet try to impose restrictions on this highly capable intelligence. There is also criticism of the fact that the underlying models for these AIs are trained mainly on US and Chinese data and might thus be ethically and legally questionable, or at least inappropriate, in the European context.

At this point, however, it is necessary to address the question of whether AI will also shake the foundations of corporate communications if it continues to develop at a rapid pace. Implementing AI is generally assumed to increase innovation, efficiency and speed. But how does it work? Where can we already see the first concrete effects? And where is it all heading?

I’m an astronaut on Instagram

The topic became widely visible on Instagram last fall when countless users were posting selfies that had been enhanced with the app Lensa. People were depicting themselves as astronauts and superheroes. All of this unfolded despite the serious concerns that have come up in recent years regarding fake news and deep-fake videos. We have jumped to the next level in the game of editing reality, a game practically built into Instagram thanks to its filters. But this can entail legal trouble, as some AI image tools let users create images in the style of individual artists from comics, manga, and gaming. These artists are justified in asking whether copyright is being deliberately circumvented here through the anonymized mass evaluation of the graphic material available worldwide. Artists strive their entire careers to create distinctive styles, then get to watch others replicate their achievements—even adapt them by machine and reuse them—free of charge and at the push of a button.

This might explain why the great enthusiasm for such AI usually comes from those who are not personally affected. It would also explain the very direct—and very entertaining—opinion musician Nick Cave shares regarding a song lyric that ChatGPT created “in the style of Nick Cave.” His verdict: “This song is bullshit, a grotesque mockery of what it is to be human.”

Advances in automated text

When it comes to the written word, translators and language service providers were the first who had to adapt. But the change has not been as drastic as one might have expected, at least in these industries. The quality of Google Translate is improving day by day, yet the often-inadequate results still hardly suffice even for internal emails and Slack chats. The translations are more amusing than usable, especially for anyone with an eye and ear trained to the language in question. Even the translation service from the Cologne-based company DeepL—arguably the most powerful AI translation engine currently available—delivers target texts that are great for users who want to, say, quickly send a response to an international customer. Anyone looking to sell the results still needs to work with a qualified native speaker to review the translation, a process known as post-editing. Professional translation agencies are only partly relying on the inertia of the masses. Many of them provide services that incorporate the products from Google and DeepL, using the tools to deliver the highest quality to the customer with the least effort, and to do so in a manner that complies with data protection laws.
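To make this concrete, here is a minimal sketch of the machine-translation step in such a workflow, using the official deepl Python client; the API key is a placeholder, and the helper function is our own illustrative naming, not any agency’s actual tooling.

```python
# Minimal sketch of the machine-translation step before human post-editing,
# using the official DeepL Python client (pip install deepl).
# "DEEPL_AUTH_KEY" is a placeholder; real keys come with a DeepL API account.
import deepl

translator = deepl.Translator("DEEPL_AUTH_KEY")

def machine_draft(text: str, target_lang: str = "EN-US") -> str:
    """Return a raw machine translation destined for human post-editing."""
    result = translator.translate_text(text, target_lang=target_lang)
    return result.text

draft = machine_draft("Wir freuen uns auf die Zusammenarbeit.")
print(draft)  # this draft now goes to a qualified native speaker for review
```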

Rewritten or auto-written?

It has long been considered sensible in content marketing and SEO to churn out endless blog articles on helpful or entertaining topics that also promote your own business. This supposedly improves Google rankings and brings new customers to the site. However, the degree to which the texts were innovative was a secondary concern. Advice articles were prescribed a certain length and keyword density but were not necessarily meant to actually be read at all. Machine creation would be optimal for texts like these. AI is the pinnacle of the evolution of the “reformulation tools” already in widespread use today, which reconstruct texts (possibly stolen from elsewhere?) first lexically, then syntactically, paraphrasing to disguise the degree of plagiarism.
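For readers unfamiliar with the metric, keyword density can be computed in a few lines. A minimal sketch follows; the article snippet and keyword are invented for illustration.

```python
# Minimal sketch of a keyword-density check, the metric SEO advice articles
# were often written to satisfy. Text and keyword are invented examples.
import re

def keyword_density(text: str, keyword: str) -> float:
    """Share of words in `text` equal to `keyword`, case-insensitive."""
    words = re.findall(r"[\w-]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words)

article = "AI tools help marketing teams. AI writes drafts, and AI checks drafts."
print(f"{keyword_density(article, 'AI'):.1%}")  # 25.0%, far denser than any human would write
```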

But the other side in this battle is gaining its own kind of intelligence: Google is getting better at recognizing the value of a text for a real reader and penalizes pages that were written primarily for the search engine. You could call it a case of dueling AIs, one where it can be assumed that Google, with its own allegedly extremely advanced AI programs, is also setting the rules.

The danger for corporate wording

Services such as DeepL and ChatGPT have a key disadvantage when used by enterprises on a large scale: the input that users feed to the machine flows into the overall data pool and is used for the AI’s continued training. This presents a real danger that a company’s own corporate wording, developed over years to market that specific company’s products and services, and even the industry expertise itself, will be generalized and used by others. Few companies recognize this general corporate knowledge as an asset in its own right, and many end up squandering it when using free AI services. Competitors, customers, and journalists who research well enough can adopt the output without being asked, just as AI users can adopt the distinctive drawing style of an artist to create a new image. The company itself is then no longer needed. Using these services also makes it difficult to ensure that the target text retains the corporate language. Preferred terminology slips through the cracks while forbidden words sneak in. Someone still needs to read through the results, whether that is an internal expert or an agency. An additional step still needs to be taken to preserve the corporate wording that has been curated over many years.
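That review step can be partially supported by simple tooling. Here is a minimal sketch of an automated corporate-wording check on AI output; the term lists are invented for illustration, and a real style guide would be far larger.

```python
# Minimal sketch of a corporate-wording check on AI output: flag banned terms
# that sneaked in and preferred terms that are missing. The term lists are
# invented; a real style guide would be far larger.
PREFERRED = {"team members"}     # terms the style guide requires
FORBIDDEN = {"staff", "cheap"}   # terms the style guide bans

def check_wording(text: str) -> dict:
    lowered = text.lower()
    return {
        "forbidden_found": sorted(term for term in FORBIDDEN if term in lowered),
        "preferred_missing": sorted(term for term in PREFERRED if term not in lowered),
    }

draft = "Our staff delivers cheap solutions."
print(check_wording(draft))
# {'forbidden_found': ['cheap', 'staff'], 'preferred_missing': ['team members']}
```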

Premium intelligence to protect intellectual property

One way to avoid the corporate wording conundrum is to use the services’ premium accounts. Here, the user input does not flow into the shared data pool. The company trains the AI only on its own content, so the output becomes more accurate and useful over time. This allows the company to truly benefit from its own wealth of experience and to turn it into more content. The larger the content volumes, the greater the benefit. Newly created texts correctly represent the industry situation, the product world, and the technical details from the very first draft. Modern content providers such as translation and copywriting agencies handle this service for their clients in an experienced and coordinated manner as a way of securing their own future. Eventually, they will become the specialists in efficient artificial intelligence conditioning.
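As a rough illustration of what training the AI on a company’s own content involves: many fine-tuning workflows start by assembling that content into prompt/completion pairs in JSONL format. A minimal sketch follows, with invented example pairs and file name.

```python
# Minimal sketch of preparing a company's own content for fine-tuning.
# Many services expect prompt/completion pairs in JSONL format.
# The example pairs and file name are hypothetical.
import json

company_examples = [
    {"prompt": "Describe our flagship product in one sentence.",
     "completion": "The AcmeSensor X1 monitors industrial pumps in real time."},
    {"prompt": "Which industries do we serve?",
     "completion": "We serve automotive, energy, and logistics customers."},
]

with open("corporate_wording.jsonl", "w", encoding="utf-8") as f:
    for example in company_examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
# The file is then submitted to the provider's fine-tuning endpoint, keeping
# the training data confined to the company's own account.
```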

Practical applications of artificial text

So, how far along is AI when it comes to generating text? The first thing that impresses the user is the immense range of possible applications. Modern AI can write anything from sonnets to social media posts to working lines of code. In journalism, machine-generated texts are already used for financial data analyses and for match reports in sports. For testing purposes, we requested an article on new opportunities for software developers in the automotive industry in light of the software-defined vehicle. Without training the AI, we got a perfectly respectable result. AI works particularly well in areas where numerous similar texts already exist: texts that are subject to strict structuring, texts that have to follow clear, factual guidelines, or concise utility texts. The AI can only generate in line with what the programmers have created and what the learning material provides, but it does so in an impressive, highly differentiated manner. Just look to the internet for countless creative examples of ChatGPT use. The application designs interview questions, writes social media threads, gives dating tips, provides ideas for finding product names, designs training plans for runners, and writes lines of code for developers.
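For the curious, a request like our test article can also be scripted. Here is a minimal sketch using the openai Python package as its API stood in early 2023; the API key, model choice, and prompt wording are illustrative.

```python
# Minimal sketch of requesting a draft article from a chat model, using the
# openai Python package (pip install openai) as its API stood in early 2023.
# The API key, model choice, and prompt wording are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a technology journalist."},
        {"role": "user", "content": (
            "Write a short article on new opportunities for software developers "
            "in the automotive industry in light of the software-defined vehicle."
        )},
    ],
)
print(response.choices[0].message.content)  # the raw draft; a human editor still reviews it
```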

Quality is determined by the input

The quality of the output earns rapturous praise in many circles. So far, though, we are missing a critical analysis of how the best results of AI compare to really well-thought-out, well-founded texts by a human author. Our expectation is that AI can write very good standard text for mass consumption but will not contribute any substantive new ideas to a discourse. The precise limits of what can be achieved when long training is combined with creative ideas from users may be far from clear, but it is no longer foolish to wonder when artificial intelligence will leave human intelligence behind.

Limits and potential for error

This is also one of the great dangers of using AI: we are not dealing with equal counterparts we can reason with. Anyone who assumes this has already fallen into the trap. We cannot transfer our human mechanisms for evaluating texts onto artificial “intelligence.” ChatGPT can write a serviceable limerick with the given rhyme scheme a-a-b-b-a, but will suddenly rhyme “gladly” with “boldly” because the artificial brain does not know which syllables need to sound the same. One can ask for an opinion on the historical question of whether the Hohenzollerns supported the Nazis’ seizure of power in Germany, but the result is blatantly wrong. With a definitive “no,” the AI takes a position that has just been refuted by a scientific publication. And it mixes in falsehoods that you wouldn’t expect from an entity that has virtually all of the world’s knowledge at its disposal: according to ChatGPT, Wilhelm II had left Germany for exile when Hitler came to power. It has skipped over the timespan between 1918 and 1933, which is to say the entire Weimar Republic. Again and again, such hair-raising and—if uncorrected—egregious mistakes are reported. Der Spiegel writer Jonathan Stock recounts that attempts to have ChatGPT treat mental illness led to it advising suicide.

Ethical implications of machine intelligence

Such mistakes may become less frequent with more training. After all, making AI available to the general public further refines its performance. AI will learn at breakneck speed. However, we are not subject to the machine when we use it; rather, we are subject to the bias of its programmers. Marginalized groups have long complained that they are disadvantaged by AI. Racial biases are built into systems, and previous training data suffered from a lack of diversity. The MIT Media Lab, for example, tested facial recognition software from Microsoft, IBM, and the Chinese company Face++. It was discovered that all systems recognized the sex of light-skinned males with an error rate of just 0.3 percent. However, the three systems misclassified dark-skinned males in 6 percent of cases, and women with darker skin color were misclassified as much as 30.3 percent of the time. When generating made-up selfies for social networks, one wonders which stereotypes are informing the images created. Some users have already complained about unwanted sexualization of the output. There is now plenty of literature on the potential misuse of AI.

AI needs a smart curator

For all the ecstasy surrounding AI, the errors described are sometimes noticed immediately, but often only when someone carefully checks the results. This occurs at every level: logic, content, and form. Sometimes, when dealing with artificial intelligence, one has the feeling that one’s counterpart is in a dream, obeying its own laws. In the AI-generated selfies, this is evident when some of the faces have empty eye sockets or the hands have eight fingers, both of which are depicted repeatedly. Similar logical errors also occur in texts, where they are much more difficult to detect. Mastodon user “The Skeptator” wrote: “What you can learn from ChatGPT: Appear convincing while spouting complete nonsense. Dunning Kruger as a Service.”

In corporate communications, therefore, the utmost caution is called for. It is a mistake to think that after a little trial and error you can simply let the AI write your texts for you. Using AI requires a well-considered strategy that determines which messages, tonality, and content are desired, as well as what stance to take on the ethically problematic aspects of AI. In addition, many legal issues, such as copyright, are far from settled.

After an elaborate training phase in a closed system, experts can then take over proofreading to identify possible errors and, in turn, improve the input accordingly. Complex control processes are at work here, and they do not make human beings superfluous. In the beginning, humans will have to put in even more effort. Later, they must take on the role of a kind of curator who steers the control loop. The use of artificial intelligence will always require human thinking as well. This can ultimately lead to more efficiency and speed in content creation, but it doesn’t have to. Perhaps it will instead require a permanently disproportionate effort to manage the results and keep the risks under control. The end has yet to be written.

This blog was graciously provided by one of our IPRN / PR partners – German tech agency TDUB – of which we are a great fan! 

Authors:

Tilo Timmermann, co-founder and managing director of the technology PR agency TDUB Kommunikationsberatung in Hamburg, Germany

Christina Wöhlke, founder and managing director of wordinc, a language services and translation agency in Hamburg, Germany