Navigating the AI search wars: opportunities and risks

Generative AI and large language models (LLMs) have driven some of the most significant advances in artificial intelligence (AI), opening up a world of possibilities. However, like any technology, their use carries risks: AI can perpetuate bias and misinformation if not used carefully.

Microsoft and Google are the two biggest contenders in the AI search race

Tech giants Microsoft and Google are locked in an AI search arms race, with Microsoft integrating ChatGPT into Bing and Google introducing its new technology, Bard, both aimed at consumer search use cases. Google has been experimenting with improving search results by serving up featured snippets for some time now, but there have been cases where the AI model confidently presented biased or factually incorrect information as the best answer.

In 2020, Google search results incorrectly displayed a snippet claiming that former US President Barack Obama was planning a coup. February brought the infamous Google Bard fiasco involving the James Webb Space Telescope, as well as the widely reported conversations users had with the Bing AI chatbot. While corrective measures were quickly taken, these incidents highlight the potential of AI-generated responses to spread misinformation.

The value of AI-powered search chat

The financial stakes are high in this contest. Both Google and Microsoft have invested heavily in artificial intelligence over the past few years, but AI has suddenly become a major factor for stock market investors.

On February 6th, Google debuted Bard, its ChatGPT competitor, with a public demo. In it, Bard mistakenly claimed that the James Webb Space Telescope took the “very first pictures” of an exoplanet outside our solar system. This was not true: according to NASA, the first photo of an exoplanet was taken by the European Southern Observatory’s Very Large Telescope in 2004.

The consequences of this public slip-up were immediate. Google’s parent company Alphabet (GOOG) fell more than 7% in the two days after the Bard factual error, wiping out more than $100 billion in market value.

OpenAI was recently valued at roughly $29 billion as it entered talks to sell existing shares to venture capital firms Thrive Capital and Founders Fund, The Wall Street Journal reported. This is double its estimated value in 2021.

The problem with generative AI and chatbot search

Providing answers directly, instead of making users comb through an endless list of results, can deliver an easier and improved search experience. But while generative AI can deliver answers with confidence, those answers may not be factually correct, or may represent only one perspective from the source material. In the past, users judged the authenticity of information by its source or citation; with this paradigm shift, users must be able to distinguish an AI hallucination from a real answer in order to use the information correctly.

The problem goes beyond factual questions, as it applies to any query that may yield a biased answer depending on its sources. For example, if a generative AI model is asked to name the best political party, it may give a biased answer reflecting the political leanings of the material it was trained on. Even if the answers are correct 90% of the time, it is difficult to measure the impact of the remaining 10% of false information presented with confidence, and the average user may not have the skills needed to distinguish genuine from biased information.
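To put those percentages in perspective, a quick back-of-envelope calculation shows how even a small error rate compounds at search scale. Both numbers below are hypothetical, since real query volumes and accuracy rates are not public:

```python
# Back-of-envelope sketch: a small error rate is still large in absolute
# terms at search scale. Both figures are assumptions, for illustration only.
daily_queries = 1_000_000   # assumed daily query volume for one AI search product
accuracy = 0.90             # assumed share of factually correct answers

confidently_wrong = round(daily_queries * (1 - accuracy))
print(confidently_wrong)    # 100000 confidently delivered wrong answers per day
```

Even at an optimistic 90% accuracy, a million daily queries would still yield a hundred thousand confident wrong answers every day, each one indistinguishable in tone from a correct one.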

Large language models are better suited to tasks whose output is easy for humans to review, such as debugging and writing code, and to tasks where truth is not critical, such as creative writing. LLMs are also well suited to tasks with large amounts of readily available training data, such as translating text between languages and speech recognition. However, given the significant costs of building and maintaining these models, savvy tech giants have prioritized consumer search use cases because of the financial potential of ad revenue.

The value of LLMs in enterprise business search

The real value of large language models lies in enterprise use cases. Enterprises sit on a wealth of information that these new technologies can mine for better efficiency, productivity and customer service. LLMs can help understand user queries in context and, used correctly, can be applied far more effectively to information retrieval needs.
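As an illustration, many enterprise "LLM + search" setups follow a retrieve-then-answer pattern: find the most relevant internal document first, then hand it to the model as grounded context so the answer can cite its source. The sketch below covers only the retrieval step, with invented document IDs and a deliberately crude keyword-overlap scorer standing in for the embedding-based search a real system would use:

```python
# Minimal sketch of the retrieval step behind enterprise "search + LLM"
# pipelines. Documents, IDs, and the scoring heuristic are all illustrative.
def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words found in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words)

def retrieve(query: str, corpus: dict) -> tuple:
    """Return (source_id, text) of the best-matching internal document."""
    best = max(corpus, key=lambda doc_id: score(query, corpus[doc_id]))
    return best, corpus[best]

corpus = {
    "hr/leave-policy": "Employees accrue 20 days of paid leave per year.",
    "it/vpn-setup": "Install the VPN client and sign in with your employee id.",
}

source, passage = retrieve("how many days of paid leave do employees get", corpus)
# The retrieved passage, together with its source id, would then be passed
# to an LLM as grounded context, so the generated answer can cite "hr/leave-policy".
print(source)  # hr/leave-policy
```

The key design point is that the model answers from retrieved, attributable company documents rather than from its opaque training data, which is what makes the citation-based trust model of traditional search recoverable.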

Siri and Alexa, while popular, are not true representations of “intelligent” virtual assistants. They are limited in their capabilities and serve mainly as command-based devices, used for automated actions such as playing a song or launching an application. But there are far greater use cases for virtual assistants beyond basic commands.

With truly intelligent virtual assistants, consumers can expect more natural conversations and more reliable self-service automation. These assistants are transforming how we access information, improving the customer experience and making self-service tasks more convenient and efficient.

AI has the potential to greatly improve our lives, but it must be used with due care. As the AI war between major tech companies continues, it is up to us as users to consume the information these technologies generate responsibly and to ensure it is used for the right purposes.

You can create your own Smart Virtual Assistant using the Kore platform. Try it yourself.


