Google has announced a significant upgrade to Bard, its AI chatbot, powered by a new technique called implicit code execution. The update sharpens Bard’s handling of mathematical tasks, coding questions, and string manipulation, and adds a feature for exporting tables from Bard’s responses directly to Google Sheets.

Bard, which is built on a large language model, uses a technique known as implicit code execution to improve its logical and mathematical abilities. The method lets Bard detect computational prompts and run code in the background, significantly increasing the accuracy of its responses.
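To make the idea concrete, here is a minimal sketch of that detect-and-execute loop, not Google’s actual implementation: spot an arithmetic expression in a prompt, evaluate it with real code instead of predicted text, and splice the exact result into the reply.

```python
import ast
import operator
import re

# Map AST operator types to real arithmetic functions.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def _eval(node):
    """Safely evaluate a parsed arithmetic expression (no names, no calls)."""
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp):
        return OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp):
        return OPS[type(node.op)](_eval(node.operand))
    raise ValueError("unsupported expression")

def answer(prompt: str) -> str:
    # "Detect" a computational prompt: look for an arithmetic expression.
    match = re.search(r"[\d(][\d\s+*/().^-]*[\d)]", prompt)
    if match:
        expr = match.group().replace("^", "**")
        result = _eval(ast.parse(expr, mode="eval").body)
        return f"The result is {result}."
    # Otherwise fall back to ordinary text generation (stubbed here).
    return "This looks like a language task; answering directly."

print(answer("What is 3 * (17 + 4)?"))  # → The result is 63.
```

The point of the pattern is that the number in the reply comes from executing code, not from the model guessing plausible-looking digits.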

More Advanced Reasoning

Bard’s new capability is a significant step forward in how chatbots and conversational AI can handle more complex queries, making it a crucial part of the AI revolution in various fields, including education, finance, and customer service.

To understand how it works, it helps to first understand how large language models (LLMs) function. When presented with a prompt, an LLM generates a response by repeatedly predicting the most likely next word. This makes LLMs excellent at language and creative tasks, but weak in areas that demand reasoning and logic.
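A toy bigram model illustrates the limitation. Like an LLM (at vastly smaller scale), it picks the statistically most likely continuation of the previous word — pure pattern completion, with no notion of actually computing anything.

```python
from collections import Counter, defaultdict

# A deliberately tiny "training corpus".
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "two plus two is four . two plus two is four ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("sat"))  # → on   (pattern completion works for language)
print(predict("is"))   # → four (even after "seven plus one is", because
                       #   "four" is just the most frequent continuation)
```

Real LLMs condition on far more context than one word, but the failure mode is the same in kind: the continuation is chosen for statistical plausibility, not computed for correctness.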

To address this weakness, Google’s team has drawn inspiration from human intelligence concepts found in Daniel Kahneman’s book “Thinking, Fast and Slow.” The book discusses the dichotomy between “System 1” and “System 2” thinking. System 1 thinking is intuitive and fast while System 2 thinking is deliberate and slow. Google’s team has combined these two forms of thinking to advance Bard’s capabilities.

A Fusion of Systems

LLMs, like Bard, usually operate under System 1 – they quickly generate text without deep thought. This approach can lead to surprising shortcomings, especially in complex mathematical or logic-based scenarios.

The latest update to Bard incorporates aspects of System 2 thinking into its operations. Through implicit code execution, Bard identifies prompts that require logical reasoning, generates the necessary code, executes it, and uses the result to produce a more accurate response. According to Google, this integration of LLMs with traditional code improved Bard’s accuracy on computation-based prompts by roughly 30% in internal testing.
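The routing between the two systems can be sketched as follows. This is a hypothetical illustration, not Bard’s real internals: the helper names (`looks_computational`, `generate_code`, `run_sandboxed`) are stand-ins for components Google has not published.

```python
import contextlib
import io

def looks_computational(prompt: str) -> bool:
    # System-2 trigger: a crude keyword check standing in for the
    # model's learned detection of computational prompts.
    keywords = ("reverse", "prime", "factor", "compute", "sum")
    return any(k in prompt.lower() for k in keywords)

def generate_code(prompt: str) -> str:
    # Stand-in for the LLM writing code; here it handles one canned
    # task: reversing the last word of the prompt.
    word = prompt.rstrip("?.!").split()[-1]
    return f"print({word!r}[::-1])"

def run_sandboxed(code: str) -> str:
    # Execute the generated snippet and capture its stdout. A real
    # system would isolate this step far more carefully.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})
    return buf.getvalue().strip()

def respond(prompt: str) -> str:
    if looks_computational(prompt):            # System 2: slow, exact
        return run_sandboxed(generate_code(prompt))
    return "(ordinary generated text reply)"   # System 1: fast, fluent

print(respond("Please reverse the word lollipop"))  # → popillol
```

String reversal is one of the examples Google cites: a next-word predictor tends to produce a plausible-looking jumble, while executed code gets it exactly right every time.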

Comparison with ChatGPT

To put Bard’s improvements in context, it is worth comparing it with other models such as OpenAI’s ChatGPT. Bard, originally built on Google’s LaMDA language model, and ChatGPT, built on OpenAI’s GPT series (GPT-3.5 and GPT-4), are both transformer-based systems with different strengths.

Google describes Bard’s training data as spanning both text and code, while OpenAI has disclosed relatively little about ChatGPT’s training corpus, so direct comparisons of dataset size and breadth are difficult. Bard’s LaMDA architecture, however, was designed specifically for dialogue applications, which may give it an edge in natural language understanding and conversational response.

In terms of availability, Bard is currently offered as an experiment, while ChatGPT has both a free tier and a paid subscription, ChatGPT Plus. Overall, both models offer wide-ranging capabilities, but the recent advancements position Bard as a highly competent contender in conversational AI.

Despite these advancements, Bard, like all AI models, is not infallible: it may occasionally fail to generate correct code, or omit the executed code from its response. Still, this leap in logical and reasoning capability marks a significant step toward more helpful and efficient AI systems. As Bard continues to improve, the future of conversational AI looks more promising than ever.