Generative AI. There is a good chance that this is something you have already heard of or used. ChatGPT, the best-known example of generative AI tools, amassed a user base of 100 million in a mere two months. By way of comparison: TikTok and Instagram took 9 and 30 months, respectively. “Write an in-depth, fascinating blog on generative AI, detailing its key benefits, potential risks, and methods for incorporating it into your workplace.” That is the prompt our specialists used as a starting point; the result is the blog below!
Numerous businesses have barely finished overhauling their work rules and habits, and already they are faced with mastering a new wave of innovation. Artificial Intelligence (AI), the technology that attempts to emulate human actions and intellect using computers, is rapidly permeating nearly all sectors and becoming an essential component of tomorrow’s workplace.
Yet Artificial Intelligence is certainly nothing new under the sun. Speech and facial recognition are examples that many of us have encountered for years via our smartphones. Now more than ever, the emphasis is on generative AI, a specialised branch of AI technology that companies like Microsoft and OpenAI aim to use to revolutionise the way we work forever.
Microsoft compares generative AI to a digital assistant that, given clear instructions, can resolve issues in a swift, efficient, intelligent, and accurate manner. Despite what some people might dare to assume, AI is not actually alive and has no plans to take over the world.
By leveraging Large Language Models (LLMs), generative AI is highly proficient in language and can (swiftly) produce new content akin to solutions created by humans. So, you write prompts or assignments in ‘human language’ that AI tools like ChatGPT and Bing Chat (Enterprise) interpret and execute as well as possible. The idea of a digital assistant that drafts documents, processes and condenses data, or generates (unique) images is no longer a thing of the future.
While your generative AI assistant is busy preparing a report based on your Excel data, you can invest your valuable time in other tasks. Need inspiration to write texts or to brainstorm new ideas for a project? No problem. In addition, ChatGPT has already proven that it can write scripts or pieces of code and thus solve problems that are outside your technical comfort zone. However, the greatest strength of generative AI is not solely found in time savings, inspiration, and the discovery of new knowledge.
We have briefly referred to prompts above: these are instructions in our own language that we can use to set a digital assistant to work. To exploit the full potential of generative AI, prompts and outputs should be seen as communicating vessels: the quality of your final output is determined by the wording, precision, and quality of your prompts. “Write a blog about AI” and “Write an inspiring blog about AI that clarifies the latest developments and their impact on the workplace” will therefore yield two completely different results.
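To make that difference tangible, here is a minimal sketch in Python. It assumes the OpenAI client library and an API key are available; the library, model name, and settings are illustrative assumptions rather than tools prescribed in this blog. The sketch sends both prompts to the same model, so the only thing that varies between the two calls is the wording of the prompt.

```python
# Minimal sketch: the only variable that changes between the two calls is the prompt.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment; the model name below is purely illustrative.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Write a blog about AI.",
    "Write an inspiring blog about AI that clarifies the latest developments "
    "and their impact on the workplace.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- Prompt: {prompt}")
    print(response.choices[0].message.content)
```

The point is not the specific library or model, but that a more precise prompt steers the same underlying model towards a far more useful result.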
Generative AI is largely about prompt engineering, a concept that contrasts sharply with the way we have been articulating our search intentions to our devices for years. Today, the searches and instructions we give to digital devices such as smartphones and laptops largely rely on keywords. While keywords will continue to be relevant for search queries, it is particularly important to help employees craft effective prompts if they want to capitalise on the wave of generative AI.
Generative AI clearly offers a lot of possibilities. But what are the flaws?
Let's first talk about security, the elephant in the room. Microsoft and OpenAI are actively developing generative AI applications suitable for the workplace, such as Microsoft 365 Copilot and Bing Chat Enterprise, while many other applications are not (yet) adapted for this context. So exercise caution with the input you provide to an AI model, and always verify where your data is stored and whether it’s being used to train the AI model.
In addition, generative AI relies heavily on the input it has available, both the data used to train the model and the quality of the prompts. If either of these falls short, a generative AI application may yield an answer that is erroneous or partly made up, yet comes across as correct. We also refer to this as ‘hallucinations’. For example, ChatGPT has been trained on data up to 2021 and thus lacks about two years of information. So if you pose a question about current events, the model may indicate that it doesn’t have an answer, or it may provide a response that sounds convincing but is not necessarily correct.
It is therefore important to always analyse the results critically and to filter out hallucinations as much as possible.
Popular applications such as ChatGPT are already widely used. However, the narrative shifts when we aim to utilise tools like Bing Chat Enterprise or Microsoft 365 Copilot with our company data. To prepare your company for this, it is best to give attention and time to the following:
We can't say it enough: the quality of your data is directly linked to the effectiveness of a generative AI model. In the context of your company, this mainly comes down to the structure and organisation of your data and the correctness of its content. Let us make this concrete with an example:
Your company produces various basic products such as milk, water, and grain. As a sales manager, you want to make a proposal to a potential customer, so you ask your digital assistant for more information to put together a suitable offer for your prospect. If your data is spread across multiple departments that don’t share information in a well-structured manner, you will inevitably miss a piece of the puzzle when searching for it. If the ‘milk’ department does not share its data, you cannot find out whether the customer in question is already purchasing milk from a competitor.
The same applies to the correctness of the data. Conflicting datasets can cause confusion and produce erroneous results. If different departments or documents list different contact persons for the same customer, it is difficult to determine which one is the right one.
These are just a few simple examples to show that you should take a closer look at your dataset. It’s essential not only to verify the accuracy of your data, but also to know where it lives and how best to share it, so you can piece the information puzzle together as completely as possible.
In addition to organising your data, it is of course useful to teach your employees how to work with generative AI. Many of us have already experimented with it, but as mentioned above, writing prompts is an art in itself. Taking a workshop or course on prompt writing, or sharing AI success stories, is certainly not a bad idea.
Provide an environment that encourages learning and, above all, creates awareness around what AI is, how it works and what it can do. Besides highlighting the positives, it is equally important to educate your employees on the responsible use of (generative) AI and to underscore the limitations and risks associated with the technology.
At Arxus, we are certainly thrilled to delve deeper and experiment with the realm of AI, all to offer our customers the best possible advice.