The Advent of Generative AI: From Spam Filters to Tailored Experiences
Artificial intelligence may only recently have captured mainstream attention, but the field is as old as computing itself. AI is an all-encompassing field of computer science concerned with creating software that can perform tasks that typically require human intelligence. Consumer software has long relied on AI to automate processes. One of the earliest applications of AI in consumer software is spam email detection, where a machine learning classification algorithm, trained on vast numbers of emails, labels scam or unsolicited messages as spam so they don't clutter your inbox.
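For illustration, a minimal sketch of such a classifier might look like the following, assuming Python and scikit-learn. The handful of hard-coded emails and the choice of a naive Bayes model are illustrative stand-ins only; real spam filters are trained on millions of labeled messages with far richer features.

```python
# Minimal spam-classification sketch (illustrative only): a bag-of-words
# naive Bayes model trained on a tiny, made-up set of emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data; production filters learn from millions of labeled emails.
emails = [
    "Congratulations, you won a free prize, claim now",
    "Meeting moved to 3pm, see updated agenda attached",
    "Limited offer: cheap loans, act immediately",
    "Quarterly report draft is ready for your review",
]
labels = ["spam", "ham", "spam", "ham"]

# Turn raw text into word counts, then fit the classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Claim your free prize now"]))  # expected: ['spam']
```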
Continuous breakthroughs grew the field beyond simple rule-based systems and classical machine learning algorithms and gave rise to deep learning, a branch of machine learning powered by neural networks that automates more complex tasks such as image and speech recognition, technologies we encounter in the likes of Google Photos and virtual assistants like Siri. When ChatGPT debuted in late 2022, it seemed novel enough to warrant the attention it received, but it represents yet another advancement in the field, this time built on transformer models. ChatGPT marked the mainstream arrival of generative AI, in which large language models (LLMs) are capable of generating text, images and other content based on large training datasets.
Generative AI is a young yet fast-evolving field, with incumbents and new entrants alike deploying resources in the space. We view the generative AI landscape as a connected value chain. The first layer houses the foundation players, which collect data and train LLMs; they include OpenAI, Cohere and Anthropic, amongst others. Foundation players can also build for specific industries, developing vertical LLMs with deeper domain knowledge. An example of a vertical LLM is BloombergGPT, built by Bloomberg for the financial industry.
A step down the value chain are the Enablers, a group of companies building developer-first tooling for LLM-powered enterprise and consumer applications. LangChain is a prominent example, empowering developers to tap the capabilities of multiple LLMs within their codebases.
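To make that concrete, a minimal sketch of routing the same prompt to different foundation models through LangChain might look like the following. It assumes the langchain-openai and langchain-anthropic integration packages are installed and API keys are set in the environment; the model names and the summarize helper are placeholders for illustration, not a prescribed setup.

```python
# Illustrative sketch: swapping between LLM providers behind one interface
# using LangChain. Assumes langchain-openai and langchain-anthropic are
# installed and OPENAI_API_KEY / ANTHROPIC_API_KEY are set; model names
# are placeholders.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

def summarize(text: str, provider: str = "openai") -> str:
    # The same prompt can be routed to different foundation models.
    if provider == "openai":
        llm = ChatOpenAI(model="gpt-4o-mini")
    else:
        llm = ChatAnthropic(model="claude-3-haiku-20240307")
    response = llm.invoke(f"Summarize in one sentence: {text}")
    return response.content

print(summarize("LangChain lets developers compose LLM calls into applications."))
```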
We are most excited about the new crop of software companies leveraging generative AI to create new customer experiences and reimagine how we engage with software. We call these companies the UX Disruptors, a group of generative AI-enabled companies leveraging LLMs to solve several customer pain points. Decision paralysis is a phenomenon synonymous with data-driven enterprise software, and we expect UX disruptors to reimagine a world where enterprise software is less reliant on dashboards and where personalized insights and decision recommendations are more prominent. We also expect UX disruptors to solve for discoverability, on both the consumer and the enterprise side. This entails software that does away with navigation pages and links to files and reports. Rather than relying on the typical UI patterns that push users through different corners of the software before they reach the information they seek, UX disruptors will surface generated content on the home page, saving valuable time. We're already seeing early examples of this. One that stands out is Perplexity, originally a search engine, which is experimenting with UI elements that serve curated, summarized news stories from across the web without users getting lost in web pages.
We're excited about the opportunities that generative AI presents in the Middle East. While general-purpose LLMs like GPT have demonstrated the capabilities of the technology, we think there's ample room for local startups building local models with data as a moat. Training LLMs on local datasets that better represent the Arabic language and its various dialects will enable generative AI-enabled software catered to an Arabic-speaking population. But while Arabic is the native language of over 400M people in the region, less than 1% of web pages are in Arabic. This makes data collection for model training a laborious process, one more reliant on offline resources such as books, magazines and other private documents that may be copyrighted and not freely available to use as training data.
The speed of development in the generative AI space makes us excited about what's to come. We anticipate that the coming years will witness a shift in economic productivity as more enterprises adopt the technology in their daily processes and across different functions. Klarna, the Swedish BNPL company, announced that its OpenAI-powered chatbot already handles two-thirds of customer service chats; the chatbot helps customers with their shopping and can process refunds and cancellation requests.
While there are cost limitations today, we anticipate that accelerating chip production and advancements on the software side will lower the costs associated with adopting and maintaining generative AI and increase uptake amongst individuals and enterprises. We believe the trajectory of generative AI adoption is analogous to that of cloud storage for enterprises: some estimates point to a 90% reduction in cloud storage costs over the last decade, with cloud adoption exceeding 90% among enterprises in developed markets and over 30% in emerging markets.
Generative AI is catalyzing a paradigm shift in how we think of and interact with software. A new generation of startups is set to capitalize on a future with vast productivity gains and build new experiences for better discoverability and decision-making.
Acknowledgements
This article was co-authored with Aly Khairy, with support from Maria Najjar.