Community
The problem: Generative AI Large Language Models (LLMs) can only answer questions or complete tasks based on what they have been trained on - unless they’re given access to external knowledge, such as your organization’s own knowledge.
LLMs can be fine-tuned, but it's an expensive endeavor. As a result, deploying enterprise context alongside LLMs has been formalized in retrieval-augmented generation (RAG) pipelines. Two such contextual vehicles are vector embeddings and knowledge graphs. Both have been deployed in financial services workflows, for the most part to prototype new contextual AI pipelines; but as developers, AI subject matter experts and others have grown more confident with GenAI, production is increasingly in scope. It is hard to believe that RAG entered the AI dictionary only a year ago - and that a mere six months or so before that, on November 30th, 2022, ChatGPT was quietly launched.
Of these, vector databases—searchable stores of vector embeddings, numerical representations of (often unstructured) data crafted to capture the data’s semantic meaning within a high-dimensional vector space—have captured AI mindshare. I'm perhaps biased: back in May 2023 I was starting to bring a vector database to market, and we in the vector database world felt we held the advantage. Unicorns like Pinecone, with revenues of just a handful of millions, were valued at $750m - my then company wanted to achieve something similar - while the vector embeddings approach underpinning vector databases was predicated on the same transformer technologies that drove LLMs. It felt like a great marriage. But, as so often in hype cycles, your perceptions get challenged quickly.
In practice, some features of vector databases carry more overhead than you might imagine; they are not quite off-the-shelf, even for simple tasks. You need:
An embedding model that captures the semantics (or sentiment) you care about
Translation of data sources into the embedding model's structure; the larger the document or asset, the more costly and time-consuming the translation
Indexing and metadata handling
Efficient search
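The steps above can be sketched in code. The following is a minimal, illustrative sketch only - the `embed` function is a toy trigram-hashing stand-in for a real embedding model (which would be a transformer), and the index, document names and metadata are hypothetical - but it shows the shape of the pipeline: embed, index with metadata, then search by cosine similarity.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy stand-in for a real embedding model: hashes character
    # trigrams into a fixed-size, unit-normalized vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int.from_bytes(text[i:i + 3].encode(), "big") % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class TinyVectorIndex:
    """Minimal index: stores (vector, metadata) pairs, searches by cosine."""
    def __init__(self):
        self.items = []

    def add(self, text: str, metadata: dict) -> None:
        self.items.append((embed(text), metadata))

    def search(self, query: str, k: int = 3):
        qv = embed(query)
        # Vectors are unit-normalized, so the dot product is the cosine.
        scored = [(sum(a * b for a, b in zip(qv, v)), m) for v, m in self.items]
        return sorted(scored, key=lambda s: -s[0])[:k]

# Hypothetical documents and metadata for illustration.
index = TinyVectorIndex()
index.add("OTC bond with non-standard coupon terms", {"doc": "bond_123.pdf"})
index.add("Retail customer mortgage preferences", {"doc": "crm_note_9"})
hits = index.search("bond coupon terms", k=1)
```

A production system would replace every piece of this sketch with heavier machinery: a trained embedding model, an approximate nearest-neighbor index, and metadata filtering - which is precisely the overhead the list above is pointing at.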
Why Graphs?
For use cases where knowledge, particularly entity-centered knowledge, is already known, vector databases can be like using a stochastic sledgehammer to crack a nut. Knowledge graph advocates have therefore made a strong case that their inherently knowledge-centric graph technology should be adopted—often standalone, sometimes in tandem with a vector store, in so-called Graph RAG. When I worked at my vector database company, I envied the powerful conference presentations I saw graph database vendors—Neo4j, ArangoDB and Ontotext (maker of GraphDB), for example—give on the Big Data circuit. Knowledge graphs, as they presented them, were naturally intuitive, simpler and more transparent containers of knowledge that you could pass to LLMs, bringing your organization's essential perspectives on, say, fraud networks or lending-risk patterns together with the vast, anarchic knowledge contained in the LLM. For a time I became a believer in graph databases, though I now see other, more flexible vehicles for porting contextual knowledge graphs to LLMs in RAG, which I'll explore later in the article. Certainly a graph database is not a requirement.
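To make "passing graph knowledge to an LLM" concrete, here is a minimal sketch under stated assumptions: the fraud-network entities, relations and identifiers are entirely hypothetical, and a plain adjacency structure stands in for a graph database (which, as noted, is not a requirement). The idea is to retrieve the neighbourhood of a known, resolved entity and serialize it into context an LLM prompt can include verbatim.

```python
from collections import defaultdict

# Hypothetical fraud-network triples (subject, relation, object); in a real
# workflow these would come from entity resolution, not hand entry.
triples = [
    ("AcmeCorp", "shares_director_with", "BetaLtd"),
    ("BetaLtd", "transacted_with", "ShellCoX"),
    ("ShellCoX", "flagged_in", "SAR-2024-017"),
    ("AcmeCorp", "registered_at", "12 Harbour Road"),
]

graph = defaultdict(list)
for s, r, o in triples:
    graph[s].append((r, o))

def neighbourhood(entity: str, depth: int = 2) -> list[str]:
    """Collect facts reachable from `entity` within `depth` hops."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph[node]:
                facts.append(f"{node} {rel} {obj}.")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

# Serialize the subgraph into plain-text context for a RAG prompt.
context = "Known facts:\n" + "\n".join(neighbourhood("AcmeCorp"))
```

The retrieved facts are explicit and traceable - each line can be attributed back to a specific edge - which is exactly the transparency the graph vendors' presentations emphasized.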
Around the time of my graph awakening, in May 2023, a wonderfully creative post titled Generative AI is Graph’s Biggest Ever Opportunity asked GPT-4 when it was best to apply knowledge graphs versus a vector database. I have adapted some of the questions from that article and instead posed them to Microsoft Copilot. The Copilot responses feel more long-winded than the originals, but they also come, perhaps, with more experience than was available then.
How the LLM Determines Graph or Vector
I cite below my two prompts and the copilot responses, noting I've occasionally shortened them for brevity.
Prompt 1: You're given two stores of external information to choose from: a knowledge graph or a vector database. Think about the steps involved in completing a question or task you (as an LLM) are given. Which of these two store types would be more likely to give you the text you need?
Microsoft Copilot: Certainly! Let’s explore the characteristics of both a knowledge graph and a vector database to determine which one would be more likely to provide the relevant text for a given question or task:
Knowledge Graph:
A knowledge graph is a structured representation of information using nodes (entities) and edges (relationships). Each node corresponds to a concept, and the edges denote connections between these concepts.
Structured: Knowledge graphs have a well-defined schema, allowing precise modeling of relationships.
Rich Context: Nodes can have attributes, and relationships carry specific meanings.
Reasoning: Knowledge graphs support inferencing and reasoning.
Use Cases:
Question Answering: Knowledge graphs directly provide answers based on their structured nature.
Entity Linking: They excel at linking entities across different domains.
Recommendation Systems: Graph-based recommendations leverage relationships.
Semantic Search: Graph-based search captures context and semantics…
Example: If you need information about the relationship between “Albert Einstein” and the “Theory of Relativity,” a knowledge graph would be helpful.
Vector Database:
A vector database stores data as high-dimensional vectors (numerical representations) in a vector space.
Embeddings: Each entity or concept is represented as a vector.
Similarity: Similar entities have similar vector representations.
Scalability: Vector databases handle large-scale data efficiently.
Similarity Search: Vector similarity allows finding similar items (e.g., images, documents).
Recommendations: Vectors enable personalized recommendations.
Anomaly Detection: Detecting outliers based on vector distances.
Example: If you want to find similar documents based on their content, a vector database would be more suitable.
In summary, if the task involves structured relationships, context, and reasoning, a knowledge graph is preferable. For similarity-based tasks or scalability, a vector database would be more useful.
Prompt 2: Hallucination is a significant problem for LLMs such as yourself. If the information you need to perform a task is available to you in a knowledge graph, do you think this problem could be eliminated?
Microsoft Copilot: Hallucination refers to the phenomenon where a language model generates responses that are plausible-sounding but factually incorrect or invented... Let’s discuss how a knowledge graph might help mitigate this issue:
Structured Information:
A knowledge graph provides structured information with well-defined relationships. Each node (entity) and edge (relationship) has a specific meaning.
By relying on a knowledge graph, I can access factual information directly from the graph rather than generating it based on probabilities... The knowledge graph can supply the correct answer without relying on creative generation.
Reducing Ambiguity:
Hallucination often occurs due to ambiguity in natural language. A knowledge graph can disambiguate terms by linking them to specific entities.
For instance, if the query involves the term “Apple,” the knowledge graph can distinguish between the fruit and the technology company.
Reasoning and Context:
Knowledge graphs support reasoning and context-aware retrieval. If I need to infer information, the graph can guide me logically...
Fact Verification:
A knowledge graph allows fact-checking against reliable sources. If I encounter a dubious claim, I can verify it using the graph...
However, the complete elimination of hallucinations is challenging even with a knowledge graph:
Incompleteness: Knowledge graphs may not cover all possible facts or recent developments.
Contextual Nuances: Some queries require context beyond what the graph provides.
Creative Generation: Even with factual data, generating creative responses (e.g., poems, stories) involves additional reasoning.
Use Cases, Entities and Fact-Checking: Concluding Comments
The Microsoft Copilot responses are interesting. When determining your RAG approach, use case matters.
When knowledge of entities and identities is at stake, knowledge graphs are better. In particular, when entities are known, likely resolved through a process called entity resolution, it is far better to take that entity knowledge directly to the LLM.
However, when your AI requires complex unstructured datasets, searchable vector databases are a great medium. Consider a bank determining pricing details for a fixed income portfolio of 100 OTC bonds, each bond with distinct non-standard terms embedded within PDF documents. That said, I have discovered that some knowledge graph providers can also feed their graphs directly from unstructured data sources, in addition to structured ones. Consider, for example, analyzing entity network information in conjunction with suspicious activity reports.
Other cases might find relevance for both: for example, retail banking customer recommendations. Known, networked (graphed) knowledge of a customer’s banking preferences can illuminate their likely next purchase, while a personalized similarity search conducted through vectors could also work. In many instances, graph and vector approaches can and do work well together.
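A toy sketch of that retail-banking example, combining the two sides: the customer, product names and product vectors are all hypothetical, and the "graph" is reduced to a simple preference edge list. Graphed knowledge supplies what the customer is known to hold; vector similarity then surfaces the nearest product they do not yet hold.

```python
import math

# Graph side (hypothetical): known customer -> product preference edges.
preferences = {"customer_42": ["travel_card", "fx_account"]}

# Vector side (hypothetical): toy product embeddings.
product_vectors = {
    "travel_card":      [1.0, 0.2, 0.0],
    "fx_account":       [0.9, 0.3, 0.1],
    "travel_insurance": [0.8, 0.1, 0.2],
    "mortgage":         [0.0, 1.0, 0.3],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend(customer: str, k: int = 1) -> list[str]:
    owned = set(preferences[customer])
    scores = {}
    for product, vec in product_vectors.items():
        if product in owned:
            continue
        # Score each candidate by similarity to products already held.
        scores[product] = max(cosine(vec, product_vectors[p]) for p in owned)
    return sorted(scores, key=scores.get, reverse=True)[:k]

top = recommend("customer_42")  # nearest unheld product to current holdings
```

Here the graph contributes certainty about what is already known, while the vectors contribute the fuzzier "what is similar" judgment - a small-scale picture of how the two approaches complement each other.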
For hallucinations, however, knowledge graphs provide clear, traceable “fact-checking” through the logical guidance contained within the graph structure. That compares well against the stochastic nature of vector search.
To be useful to the enterprise, LLMs must access external knowledge and data. Knowledge graphs offer a powerful repository of direct, traceable, contextual knowledge of entities and knowledge structures for RAG pipelines.
Your LLM will indeed wish you had used a knowledge graph!
This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.