IBM CEO Praises Real Open Source for Enterprise Gen AI as New Efforts Emerge at Think 2024

IBM is strengthening its foundation for generative AI with a new set of technologies and partnerships announced at the Think 2024 conference. The company has a long history in AI, predating the modern hype era of generative AI. Last year, at Think 2023, IBM introduced its Watsonx genAI product platform, which provides organizations with enterprise-grade models, governance, and tools. Now, at Think 2024, IBM is releasing a series of its Granite models as open source under the Apache license. These models range from 3 to 34 billion parameters and cover both code and language tasks. Additionally, IBM will incorporate Mistral AI models into its platform.

IBM's CEO, Arvind Krishna, believes that AI's impact will be on par with historical technological shifts like the steam engine or the internet. In his view, the move toward open-source models is crucial, and IBM's commitment to real open source matters for enterprises.

IBM CEO Arvind Krishna

This development reflects the growing importance of open-source pre-trained AI models, which businesses can combine with private or real-time data to enhance productivity and cost-efficiency. IBM's active contribution to open-source AI models aligns with this trend and underscores the impact of open source in the AI world.

IBM THINK 2024 Conference

IBM Granite Enterprise AI: Building on Open Source Foundations

IBM unveiled its Granite models in September 2023 and has been expanding these offerings ever since. Among these models is a 20-billion-parameter base code model that powers IBM’s watsonx code assistant for Z, assisting organizations in modernizing outdated COBOL applications.

At Think 2024, IBM made headlines by officially releasing a selection of its most advanced Granite models under the open-source Apache license. It's noteworthy that while many vendors claim to offer open models, few are genuinely licensed under an Open Source Initiative (OSI)-approved license. According to IBM, only the Mistral and now Granite models are high-performance large language models (LLMs) available under a true open-source license such as Apache.

IBM CEO Arvind Krishna emphasized the importance of genuine open-source licensing for enterprises. Krishna pointed out that many of the so-called open licenses used by vendors, unlike the Apache license, don't meet real open-source standards. His point was that most competitors are using the term "open" for marketing purposes.

Expanding Watsonx Assistants to Boost Enterprise AI

While LLMs are crucial for enterprise generative AI, AI assistants also play a vital role. These assistants, referred to as copilots by companies like Microsoft and Salesforce, provide a user-friendly approach for many organizations to implement AI. During a media briefing, Rob Thomas, IBM’s Senior Vice President and Chief Commercial Officer, explained that AI assistants offer a more packaged solution for deploying AI in enterprises.

At Think 2024, IBM introduced three new assistants. The first is the Watsonx code assistant for Java, which aids developers in writing Java application code, leveraging IBM’s extensive experience with the language.

The second is the Watsonx assistant for Z, designed to help manage IBM’s mainframe system architecture, IBM Z. Thomas highlighted that this assistant focuses on assisting organizations in handling their IBM Z environments. The third new service is Watsonx Orchestrate, which enables enterprises to create their own assistants.

RAG, Vector Databases, and InstructLab

One of the prevalent deployment patterns for enterprise generative AI today is Retrieval Augmented Generation (RAG). RAG enhances assistants and AI chatbots with real enterprise data that an LLM wasn’t originally trained on. Central to RAG is the use of vector databases or vector support within existing databases. While RAG and vector databases are critical to modern enterprise AI, IBM has chosen not to develop its own vector database.

“In terms of vector databases, this is kind of like a flavor of the month thing, meaning I think there’s hundreds of options,” Thomas said. “We’ve done integrations with many and by definition, you have to have the capability in the platform, but we don’t feel like we need to own that capability.”
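The RAG pattern described above can be illustrated with a minimal sketch. This is not IBM's implementation; it is a toy example in which an in-memory store stands in for a vector database, a bag-of-words counter stands in for a trained embedding model, and the retrieved documents are simply prepended to the prompt that would be sent to an LLM. All names here are hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # A real RAG system would use a trained embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class InMemoryVectorStore:
    # Stand-in for a vector database: stores (embedding, document) pairs
    # and returns the k documents most similar to a query.
    def __init__(self):
        self.items = []

    def add(self, doc):
        self.items.append((embed(doc), doc))

    def search(self, query, k=2):
        q = embed(query)
        scored = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [doc for _, doc in scored[:k]]

def build_rag_prompt(store, question, k=2):
    # Retrieval step: fetch relevant enterprise documents the LLM
    # was never trained on, then prepend them as context.
    context = "\n".join(store.search(question, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

store = InMemoryVectorStore()
store.add("Watsonx Orchestrate lets enterprises build their own AI assistants.")
store.add("The quarterly sales report is filed every March, June, September, and December.")
prompt = build_rag_prompt(store, "When is the quarterly sales report filed?", k=1)
print(prompt)
```

The design point matches Thomas's comment: the retrieval logic is generic, so the vector store behind it is interchangeable; a platform only needs the integration surface, not its own database.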

Despite the importance of RAG, IBM sees significant potential in the InstructLab technology recently announced by its Red Hat unit. InstructLab enables continuous improvement of models by letting contributors add skills and knowledge that are used to generate synthetic training data for fine-tuning, indicating a promising direction for future developments.

In summary, IBM’s ongoing expansion and open-source commitment with Granite models, combined with its innovative Watsonx assistants and strategic focus on RAG and InstructLab, underscore its leadership in advancing enterprise AI. As AI technology continues to evolve, these initiatives highlight IBM’s role in shaping the future of enterprise AI solutions.
