MDM as a Competitive Advantage in Organizations

June 18, 2024 by Bluetab

Maryury García

Cloud | Data & Analytics

Just like natural resources, data acts as the driving fuel for innovation, decision-making, and value creation across various sectors. From large tech companies to small startups, digital transformation is empowering data to become the foundation that generates knowledge, optimizes efficiency, and offers personalized experiences to users.

Master Data Management (MDM) plays an essential role in providing a solid structure to ensure the integrity, quality, and consistency of data throughout the organization.

Although this discipline has existed since the mid-1990s, some organizations have not fully adopted MDM. This may be due to factors such as a lack of understanding of its benefits, cost, complexity, and/or maintenance.

According to a Gartner survey, the global MDM market was valued at $14.6 billion in 2022 and is expected to reach $24 billion by 2028, with a compound annual growth rate (CAGR) of 8.2%.

Figure 01: CAGR of the global MDM market

Before diving into the world of MDM, it is important to understand some relevant concepts. To manage master data, the first question we ask is: What is master data? Master data constitutes the set of shared, essential, and critical data for business execution. It has a lifecycle (validity period) and contains key information for the organization’s operation, such as customer data, product information, account numbers, and more.

Once master data has been defined, it is important to understand its characteristics: it is unique, persistent, and complete, with broad coverage, among other qualities. This is vital to ensure consistency and quality.

Therefore, it is essential to have an approach that considers both organizational aspects (identification of data owners, impacted users, matrices, etc.) and process aspects (policies, workflows, procedures, and mappings). Bluetab's proposal summarizes this approach along each of these dimensions.

Figure 02: Use case: Master data approach

Another aspect to consider from our experience with master data, which is key to starting an organizational implementation, is understanding its “lifecycle.” This includes:

  • The business areas inputting the master data (referring to the areas that will consume the information).
  • The processes associated with the master data (that create, block, report, update the master data attributes—in other words, the treatment that the master data will undergo).
  • The areas outputting the master data (referring to the areas that ultimately maintain the master data).
  • All of this is intertwined with the data owners and supported by associated policies, procedures, and documentation.
Figure 03: Use case: Master data lifecycle matrix

Why do we describe Master Data Management (MDM) as a "discipline"? Because it brings together a body of knowledge, policies, practices, processes, and technology (the tooling used to collect, store, manage, and analyze master data). This allows us to conclude that MDM is much more than just a tool.

Below, we provide some examples that will help to better understand the contribution of proper master data management in various sectors:

  • Retail Sector: Retail companies, for example, a bakery, would use MDM to manage master data for product catalogs, customers, suppliers, employees, recipes, inventory, and locations. This creates a detailed customer profile to ensure a consistent and personalized shopping experience across all sales channels.
  • Financial Sector: Financial institutions could manage customer data, accounts, financial products, pricing, availability, historical transactions, and credit information. This helps improve the accuracy and security of financial transactions and operations, as well as verify customer identities before opening an account.
  • Healthcare Sector: In healthcare, MDM is used to manage patient data, procedure data, diagnostic data, imaging data, medical facilities, and medications, ensuring the integrity and privacy of confidential information. For example, a hospital can use MDM to maintain an EMR (Electronic Medical Record) for each patient.
  • Telecommunications Sector: In telecommunications, companies use MDM to manage master data for their devices, services, suppliers, customers, and billing.

In Master Data Management, the following fundamental operations are performed: data cleaning, which removes duplicates; data enrichment, which completes records; and the establishment of a single source of truth. How long this takes depends on the state of the organization's records and on its business objectives. The key tasks are shown below:

Figure 04: Key MDM tasks
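
To make these operations concrete, below is a minimal, purely illustrative Python sketch (using pandas, with invented column names and records) of two of the tasks mentioned above: de-duplication and consolidation into a single golden record. It is a sketch of the idea, not a description of any specific MDM tool.

import pandas as pd

# Hypothetical customer records coming from two source systems.
customers = pd.DataFrame([
    {"customer_id": 1, "name": " ana perez ", "email": "ana@mail.com", "updated_at": "2024-01-10"},
    {"customer_id": 1, "name": "Ana Perez", "email": None, "updated_at": "2024-03-02"},
    {"customer_id": 2, "name": "Luis Gomez", "email": "luis@mail.com", "updated_at": "2023-11-20"},
])

# Data cleaning: normalize names and drop exact duplicates.
customers["name"] = customers["name"].str.strip().str.title()
customers = customers.drop_duplicates()

# Single source of truth: one golden record per customer,
# keeping the most recent non-null value for each attribute.
golden = (
    customers.sort_values("updated_at")
             .groupby("customer_id", as_index=False)
             .agg(lambda s: s.dropna().iloc[-1] if s.notna().any() else None)
)
print(golden)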

Now that the concept is clearer, it is important to keep in mind that the strategy for managing master data is to keep it organized: up-to-date, accurate, non-redundant, consistent, and complete.

What benefits does implementing an MDM provide?

  • Data Quality and Consistency: Improves the quality of master data by eliminating duplicates and correcting errors, ensuring the integrity of information throughout the organization.
  • Efficiency and Resource Savings: Saves time and resources by automating tasks of data cleaning, enrichment, and updating, freeing up staff for more strategic tasks.
  • Informed Decision-Making: Allows the identification of patterns and trends from reliable data, driving strategic and timely decision-making.
  • Enhanced Customer Experience: Improves the customer experience by providing a 360-degree view of the customer, enabling more personalized and relevant interactions.

At Bluetab, we have helped clients from various industries with their master data strategy, from the definition, analysis, and design of the architecture to the implementation of an integrated solution. From this experience, we share these five steps to help you start managing master data:

List Your Objectives and Define a Scope

First, identify which data entities are of commercial priority within the organization. Once identified, evaluate the number of sources, definitions, exceptions, and volumes that the entities have.

Define the Data You Will Use

Which part of the data is important for decision-making? It could be all of the fields in the record, or just a few of them, such as name, address, and phone number. Get support from data governance staff for this definition.

Establish Processes and Owners

Who will have the rights to modify or create the data? For what and how will this data be used to reinforce or enhance the business? Once these questions are answered, it is important to have a process for how the information will be handled, from the registration of the master data through to its final sharing with users or applications.

Seek Scalability

Once you have defined the processes, try to ensure they can be integrated with future changes. Take the time to define your processes and avoid making drastic changes in the future.

Find the Right Data Architecture, Don’t Take Shortcuts

Once the previous steps are defined and generated, it’s time to approach your Big Data & Analytics strategic partner to ensure these definitions are compatible within the system or databases that house your company’s information.

Figure 05: First steps with MDM

Final Considerations

Based on our experience, we suggest considering the following aspects when assessing/defining the process for each domain in master data management, subject to the project scope:

  • Management of Routes:
    • Consider how the owner of master data creation registers it (automatically, eliminating manual data entry from any other application), and how any current process in which one area or person manually centralizes information from the other areas involved (emails, calls, Excel sheets, etc.) can be automated as a workflow.
  • Alerts & Notifications:
    • It is recommended to establish deadlines for completing the data in each area and for the party responsible for updating a master data record.
    • The time required to complete each data entry should be agreed upon among all the areas involved, and alerts should be configured to communicate when the master data has been updated.
  • Blocking and Discontinuation Processes:
    • A viable alternative is to make these changes operationally and then communicate them to the MDM through replication.
  • Integration:
    • Evaluate the possibility of integrating with third parties to automate the registration process for clients, suppliers, etc., and avoid manual entry: RENIEC, SUNAT, Google (coordinates X, Y, Z), or other agents, evaluating suitability for the business.
  • Incorporation of Third Parties:
    • Consider the incorporation of clients and suppliers at the start of the master data creation flows and at the points of updating.
Figure 06: Aspects to consider in MDM

In summary, master data is the most important common data for an organization and serves as the foundation for many day-to-day processes at the enterprise level. Master data management helps ensure that data is up-to-date, accurate, non-redundant, consistent, complete, and properly shared, providing tangible benefits in data quality, operational efficiency, informed decision-making, and customer experience. This contributes to the success and competitiveness of the organization in an increasingly data-driven digital environment.

If you found this article interesting, we appreciate you sharing it. At Bluetab, we look forward to hearing about the challenges and needs you have in your organization regarding master and reference data.

Maryury García

Cloud | Data & Analytics


Oscar Hernández, new CEO of Bluetab LATAM.

May 16, 2024 by Bluetab

Bluetab

  • Oscar assumes the responsibility of developing, leading, and executing Bluetab's strategy in Latin America, with the aim of expanding the company's products and services.
  • Bluetab in the Americas began operations in 2012 and has a presence in Colombia, Mexico, and Peru.

Oscar Hernández Rosales takes on the responsibility as CEO of Bluetab LATAM and will be in charge of developing, leading, and executing Bluetab’s strategy in the region, with the objective of expanding the company’s products and services to support the continuous digital transformation of its clients and the creation of value.

During this transition, Oscar will continue to serve as Country Manager of Mexico, ensuring effective coordination between our local operations and our regional strategy, further strengthening our position in the market.

"This new challenge is a privilege for me. I am committed to leading with vision, continuing to strengthen the Bluetab culture, and working for the well-being of our collaborators and the success of the business. An important challenge in an industry that constantly adapts to the evolution of new technologies. Bluetab innovates, anticipates, and is dedicated to providing the best customer experience, supported by a professional, talented, and passionate team that understands the needs of organizations," says Oscar.


Leadership changes at Bluetab EMEA

April 3, 2024 by Bluetab

Bluetab

  • Luis Malagón, as the new CEO of Bluetab EMEA, assumes the highest position in the company in the region.
  • Meanwhile, Tom Uhart will continue to drive the development of the Data and Artificial Intelligence Offering, enhancing Bluetab's positioning.

Photo: Luis Malagón, CEO of Bluetab EMEA, and Tom Uhart, Co-Founder and Data & AI Offering Lead

Luis Malagón becomes the new CEO of Bluetab EMEA after more than 10 years of experience within the company, having contributed significantly to its success and positioning. His proven leadership qualities position him perfectly to drive Bluetab in its next phase of growth.

‘This new challenge leading the EMEA region is a great opportunity to continue fostering a customer-oriented culture and enhancing their transformation processes. Collaboration is part of our DNA, and this, combined with an exceptional team, positions us in the right place at the right time. Together with IBM Consulting, we will continue to lead the market in Data and Artificial Intelligence solutions’, states Luis.

At Bluetab, we have been leading the data sector for nearly 20 years. Throughout this time, we have adapted to various trends and accompanied our clients in their digital transformation, and now we continue to do so with the arrival of Generative AI.

Tom Uhart’s new journey

Tom Uhart, Co-Founder of Bluetab and until now CEO of EMEA, will continue to drive the project from his new role as Data & AI Offering Lead. In this way, Tom will continue to enhance the company’s positioning and international expansion hand in hand with the IBM group and other key players in the sector.

‘Looking back, I am very proud to have seen Bluetab grow over all these years. A team that stands out for its great technical talent, rebellious spirit, and culture of closeness. We have achieved great goals, overcome obstacles, and created a legacy of which we can all be proud. Now it's time to leave the next stage of Bluetab's growth in Luis's hands, which I am sure will be a great success and will take the company to the next level’, says Tom.


Boost Your Business with GenAI and GCP: Simple and for Everyone

March 27, 2024 by Bluetab

Alfonso Zamora
Cloud Engineer

Introduction

The main goal of this article is to present a solution for data analysis and engineering from a business perspective, without requiring specialized technical knowledge.

Companies run a large number of data engineering processes to extract the most value from their business, and these are sometimes far more complex than the use case requires. Our proposal is to simplify this work so that a business user who previously could not develop or implement the technical part becomes self-sufficient and can build their own technical solutions using natural language.

To fulfill our goal, we will make use of various services from the Google Cloud platform to create both the necessary infrastructure and the different technological components to extract all the value from business information.

Before we begin

Before we begin with the development of the article, let’s explain some basic concepts about the services and different frameworks we will use for implementation:

  1. Cloud Storage[1]: It is a cloud storage service provided by Google Cloud Platform (GCP) that allows users to securely and scalably store and retrieve data.
  2. BigQuery[2]: It is a fully managed data analytics service that allows you to run SQL queries on massive datasets in GCP. It is especially effective for large-scale data analysis.
  3. Terraform[3]: It is an infrastructure as code (IaC) tool developed by HashiCorp. It allows users to describe and manage infrastructure using configuration files in the HashiCorp Configuration Language (HCL). With Terraform, you can define resources and providers declaratively, making it easier to create and manage infrastructure on platforms like AWS, Azure, and Google Cloud.
  4. PySpark[4]: It is a Python interface for Apache Spark, an open-source distributed processing framework. PySpark makes it easy to develop parallel and distributed data analysis applications using the power of Spark.
  5. Dataproc[5]: It is a cluster management service for Apache Spark and Hadoop on GCP that enables efficient execution of large-scale data analysis and processing tasks. Dataproc supports running PySpark code, making it easy to perform distributed operations on large datasets in the Google Cloud infrastructure.

What is an LLM?

An LLM (Large Language Model) is a type of artificial intelligence (AI) algorithm that utilizes deep learning techniques and massive datasets to comprehend, summarize, generate, and predict new content. An example of an LLM could be ChatGPT, which makes use of the GPT model developed by OpenAI.

In our case, we will be using the Codey model (code-bison), a model implemented by Google as part of the VertexAI stack, which is optimized for generating code because it has been trained specifically for that task.

However, it is not only important which model we use, but also how we use it. By this, I mean it is necessary to understand the input parameters that directly affect the responses the model will provide, among which we can highlight the following (a minimal call sketch follows the list):

  • Temperature: This parameter controls the randomness in the model’s predictions. A low temperature, such as 0.1, generates more deterministic and focused results, while a high temperature, such as 0.8, introduces more variability and creativity in the model’s responses.
  • Prefix (Prompt): The prompt is the input text provided to the model to initiate text generation. The choice of prompt is crucial as it guides the model on the specific task expected to be performed. The formulation of the prompt can influence the quality and relevance of the model’s responses, although the length should be considered to meet the maximum number of input tokens, which is 6144.
  • Output Tokens (max_output_tokens): This parameter limits the maximum number of tokens that will be generated in the output. Controlling this value is useful for avoiding excessively long responses or for adjusting the output length according to the specific requirements of the application.
  • Candidate Count: This parameter controls the number of candidate responses the model generates before selecting the best option. A higher value can be useful for exploring various potential responses, but it will also increase computational cost.
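
As a reference, the sketch below shows how these parameters can be passed to code-bison through the Vertex AI Python SDK. The project, region, and prompt text are placeholders, and the exact package path and argument names may vary between SDK versions, so treat this as an assumption-laden sketch rather than a definitive call (candidate_count is omitted because its availability depends on the SDK version).

# Hedged sketch: calling Codey (code-bison) on Vertex AI with the parameters discussed above.
# "my-gcp-project", the region, and the prompt are placeholders.
import vertexai
from vertexai.language_models import CodeGenerationModel

vertexai.init(project="my-gcp-project", location="europe-southwest1")

model = CodeGenerationModel.from_pretrained("code-bison")

response = model.predict(
    prefix="Generate Terraform for a GCS bucket named 'application-data-input'.",
    temperature=0.0,         # deterministic, focused output
    max_output_tokens=2048,  # upper bound on the length of the generated code
)
print(response.text)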

Development of the prompt

Now that we have defined the parameters, understand what each of them is for, and know what a prompt is, let's focus on how to use one and implement it so that it adapts to our needs.

As mentioned earlier, the goal is to generate both PySpark code and Terraform in order to perform infrastructure creation and data processing tasks. Since these are completely different tasks, as a first important decision for our prompt, we have chosen to divide it into two specific parts so that each prompt is trained with examples to generate one language or the other.

Each prompt begins with an introduction specifying the objective and the kind of requests that will be made, followed by a series of examples in which a natural-language input simulates a request and the desired output maps that text to the corresponding code. The goal is to generate a structured prompt that the model can process efficiently, so that in subsequent cases it can associate the available examples with appropriate responses.

Let’s put these small tips into practice to see one of the entries to the Terraform prompt:

input: I need to read a file named `myfile.csv` inside the folder `data/input` in a bucket called `application-data-input`, and I need to create a new dataset and table in BigQuery called `tst_tf_ai`. The process has to read the file, remove the first three characters of the name, and write the result in BigQuery. The schema of the table will be ‘name’ of String type and ‘surname’ of String type. The infrastructure has to be deployed in the project `project-cloud-223` in GCP, and region `europe-southwest1`.
output: ```hcl
terraform {
 required_version = ">= 0.15"
 required_providers {
  google = "~> 5.0"
 }
}

resource "google_storage_bucket" "temporary_files" {
 project       = "project-cloud-223"
 name          = "temporary-files-dataproc"
 location      = "europe-southwest1"
 force_destroy = true
}

resource "google_storage_bucket" "bucket1" {
 project       = "project-cloud-223"
 name          = "application-data-input"
 location      = "europe-southwest1"
 force_destroy = true
}

resource "google_storage_bucket_object" "folder1" {
 name    = "data/input/"
 bucket  = google_storage_bucket.bucket1.name
 content = " "
}

resource "google_bigquery_dataset" "dataset" {
 project       = "project-cloud-223"
 dataset_id    = "tst_tf_ai"
 friendly_name = "tst_tf_ai"
 description   = "Description"
 location      = "europe-southwest1"
}

resource "google_bigquery_table" "table" {
 project              = "project-cloud-223"
 dataset_id           = google_bigquery_dataset.dataset.dataset_id
 deletion_protection  = false
 table_id             = "tst_tf_ai"
 schema               = <<EOF
[
 {
  "name": "name",
  "type": "STRING",
  "mode": "NULLABLE",
  "description": "The name"
 },
 {
  "name": "surname",
  "type": "STRING",
  "mode": "NULLABLE",
  "description": "The surname"
 }
]
EOF
}
```

It’s important to implement examples as closely as possible to your use case so that the responses are more accurate, and also to have plenty of examples with a variety of requests to make it smarter when returning responses. One of the practices to make the prompt implementation more interactive could be to try different requests, and if it’s unable to do what’s been asked, the instructions should be modified.

As we have observed, developing the prompt does require technical knowledge to translate requests into code, so this task should be tackled by a technical person to subsequently empower the business user. In other words, we need a technical person to generate the initial knowledge base so that business users can then make use of these types of tools.

It has also been noticed that generating Terraform code is more complex than generating PySpark code, so more input examples were required when creating the Terraform prompt to tailor it to our use case. For example, the examples enforce that the Terraform output always creates a temporary bucket (temporary-files-dataproc) so that it can be used by Dataproc.

Practical Cases

Three examples have been carried out with different requests, requiring more or less infrastructure and transformations to see if our prompt is robust enough.

In the file ai_gen.py, we see the necessary code to make the requests and the three examples, in which it is worth highlighting the configuration chosen for the model parameters:

  • It has been decided to set the value of candidate_count to 1 so that it has no more than one valid final response to return. Additionally, as mentioned, increasing this number also entails increased costs.
  • The max_output_tokens has been set to 2048, which is the highest number of tokens for this model, as if it needs to generate a response with various transformations, it won’t fail due to this limitation.
  • The temperature has been varied between the Terraform and PySpark code. For Terraform, we have opted for 0 so that it always gives the response that is considered closest to our prompt, ensuring it doesn’t generate more than strictly necessary for our objective. In contrast, for PySpark, we have opted for 0.2, which is a low temperature to prevent excessive creativity, yet still allowing it to provide diverse responses with each call, enabling performance testing among them.

We are going to carry out an example of a request that is available in the following GitHub repository, where it is detailed step by step in the README to be able to execute it yourself. The request is as follows:

In the realm of ‘customer_table,’ my objective is the seamless integration of pivotal fields such as ‘customer_id’, ‘name’, and ’email’. These components promise to furnish crucial insights into the essence of our valued customer base.

Conversely, when delving into the nuances of ‘sales_table,’ the envisioned tapestry includes essential elements like ‘order_id’, ‘product’, ‘price’, ‘amount’ and ‘customer_id’. These attributes, meticulously curated, will play a pivotal role in the nuanced exploration and analysis of sales-related data.

The ‘bigtable_info’ table will have all the fields resulting from the union of the two tables, ‘customer_table’ and ‘sales_table.’ Here, the outcome of joining the two tables by the ‘customer_id’ numeric field will be stored.

Furthermore, as part of our meticulous data collection strategy, I plan to inaugurate a dedicated Google Cloud Storage bucket christened ‘sales-etl-bucket.’ This repository is strategically designed to serve as a robust container for collating data, particularly hailing from CSV files. This endeavor is set to enhance the efficiency and organizational prowess of our data management processes.

To enhance the structure, two distinct subfolders, ‘input/sales’ and ‘input/customers’ have been ingeniously incorporated within the ‘sales-etl-bucket,’ ensuring a refined organization of the incoming data streams.

You will need to read the CSV files within the ‘input/sales’ folder to write the data into the ‘sales_table’. Additionally, you should perform the same operation with the files from the ‘input/customers’ folder to write the data into the ‘customer_table’. Finally, you are required to perform a join between the sales and customers information based on the ‘customer_id’ number field and write the merged information into the ‘bigtable_info’.

We run the process and analyze the outputs generated in generated/customers/python/etl.py and generated/terraform/main.tf.

terraform {
 required_version = ">= 0.15"
 required_providers {
   google = "~> 5.0"
 }
}


resource "google_storage_bucket" "temporary_files" {
 project       = "my-cloud-project"
 name          = "temporary-files-dataproc"
 location      = "europe-southwest1"
 force_destroy = true
}


resource "google_storage_bucket" "sales_etl_bucket" {
 project       = "my-cloud-project"
 name          = "sales-etl-bucket"
 location      = "europe-southwest1"
 force_destroy = true
}


resource "google_storage_bucket_object" "sales_folder" {
 name    = "input/sales/"
 bucket  = google_storage_bucket.sales_etl_bucket.name
 content = " "
}


resource "google_storage_bucket_object" "customers_folder" {
 name    = "input/customers/"
 bucket  = google_storage_bucket.sales_etl_bucket.name
 content = " "
}


resource "google_bigquery_dataset" "raw_sales_data" {
 project       = "my-cloud-project"
 dataset_id    = "raw_sales_data"
 friendly_name = "Raw Sales Data"
 description   = "This dataset contains the raw sales data."
 location      = "europe-southwest1"
}


resource "google_bigquery_table" "customer_table" {
 project              = "my-cloud-project"
 dataset_id           = google_bigquery_dataset.raw_sales_data.dataset_id
 deletion_protection  = false
 table_id             = "customer_table"
 schema               = <<EOF
[
 {
   "name": "customer_id",
   "type": "INT64",
   "mode": "REQUIRED",
   "description": "The customer ID."
 },
 {
   "name": "name",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The customer's name."
 },
 {
   "name": "email",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The customer's email address."
 }
]
EOF
}


resource "google_bigquery_table" "sales_table" {
 project              = "my-cloud-project"
 dataset_id           = google_bigquery_dataset.raw_sales_data.dataset_id
 deletion_protection  = false
 table_id             = "sales_table"
 schema               = <<EOF
[
 {
   "name": "order_id",
   "type": "INT64",
   "mode": "REQUIRED",
   "description": "The order ID."
 },
 {
   "name": "product",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The product name."
 },
 {
   "name": "price",
   "type": "FLOAT64",
   "mode": "NULLABLE",
   "description": "The product price."
 },
 {
   "name": "amount",
   "type": "INT64",
   "mode": "NULLABLE",
   "description": "The product amount."
 },
 {
   "name": "customer_id",
   "type": "INT64",
   "mode": "REQUIRED",
   "description": "The customer ID."
 }
]
EOF
}


resource "google_bigquery_dataset" "master_sales_data" {
 project       = "my-cloud-project"
 dataset_id    = "master_sales_data"
 friendly_name = "Master Sales Data"
 description   = "This dataset contains the master sales data."
 location      = "europe-southwest1"
}


resource "google_bigquery_table" "bigtable_info" {
 project              = "my-cloud-project"
 dataset_id           = google_bigquery_dataset.master_sales_data.dataset_id
 deletion_protection  = false
 table_id             = "bigtable_info"
 schema               = <<EOF
[
 {
   "name": "customer_id",
   "type": "INT64",
   "mode": "REQUIRED",
   "description": "The customer ID."
 },
 {
   "name": "name",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The customer's name."
 },
 {
   "name": "email",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The customer's email address."
 },
 {
   "name": "order_id",
   "type": "INT64",
   "mode": "REQUIRED",
   "description": "The order ID."
 },
 {
   "name": "product",
   "type": "STRING",
   "mode": "NULLABLE",
   "description": "The product name."
 },
 {
   "name": "price",
   "type": "FLOAT64",
   "mode": "NULLABLE",
   "description": "The product price."
 },
 {
   "name": "amount",
   "type": "INT64",
   "mode": "NULLABLE",
   "description": "The product amount."
 }
]
EOF
}


We can see that it adds a bucket with two folders, so the data can be uploaded to the corresponding folder depending on whether it is sales or customer data. It has also created two datasets: one for the raw layer, holding the data as it arrives from the source in the customers and sales tables, and a second one for the final table that stores the result of joining the information from both tables.

Let’s review the PySpark code:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F


# Configure Spark session
spark = SparkSession.builder.appName("SalesETL").getOrCreate()


# Define file paths
sales_path = "gs://sales-etl-bucket/input/sales/*.csv"
customers_path = "gs://sales-etl-bucket/input/customers/*.csv"


# Read CSV files into DataFrames
sales_df = spark.read.csv(sales_path, header=True, inferSchema=True)
customers_df = spark.read.csv(customers_path, header=True, inferSchema=True)


# Write tables to BigQuery
sales_df.write.format("bigquery") \
   .option("temporaryGcsBucket", "temporary-files-dataproc") \
   .option("table",  "raw_sales_data.sales_table") \
   .mode("overwrite") \
   .save()
customers_df.write.format("bigquery") \
   .option("temporaryGcsBucket", "temporary-files-dataproc") \
   .option("table",  "raw_sales_data.customer_table") \
   .mode("overwrite") \
   .save()


# Join sales and customers tables
bigtable_info_df = sales_df.join(customers_df, on="customer_id", how="inner")


# Write joined table to BigQuery
bigtable_info_df.write.format("bigquery") \
   .option("temporaryGcsBucket", "temporary-files-dataproc") \
   .option("table",  "master_sales_data.bigtable_info") \
   .mode("overwrite") \
   .save()


# Stop the Spark session
spark.stop()

It can be observed that the generated code reads from each of the folders and inserts each data into its corresponding table.

To make sure the example is done correctly, we can follow the steps in the README of the GitHub repository[8] to apply the changes to the Terraform code, upload the sample files from the example_data folder, and run a Batch job in Dataproc.

Finally, we check if the information stored in BigQuery is correct:

  • Table customer:
  • Table sales:
  • Final table:

In this way, we have managed to build a fully functional, operational process driven by natural language. There is another example that can be executed, and I also encourage you to create more examples, or even improve the prompt, to incorporate more complex cases and adapt it to your use case.

Conclusions and Recommendations

Because the examples are very specific to particular technologies, any change to an example in the prompt, or even modifying a word in the input request, can affect the results. This means the prompt is not yet robust enough to absorb different expressions without affecting the generated code. To have a production-ready prompt and system, more training and a greater variety of solutions, requests, and expressions are needed. With all this, we will finally be able to deliver a first version to our business users so that they can be autonomous.

Specifying the maximum possible detail to an LLM is crucial for obtaining precise and contextual results. Here are several tips to keep in mind to achieve appropriate results:

  • Clarity and Conciseness:
    • Be clear and concise in your prompt, avoiding long and complicated sentences.
    • Clearly define the problem or task you want the model to address.
  • Specificity:
    • Provide specific details about what you are looking for. The more precise you are, the better results you will get.
  • Variability and Diversity:
    • Consider including different types of examples or cases to assess the model’s ability to handle variability.
  • Iterative Feedback:
    • If possible, iterate on your prompt based on the results obtained and the model’s feedback.
  • Testing and Adjustment:
    • Before using the prompt extensively, test it with examples and adjust as needed to achieve desired results.

Future Perspectives

In the field of LLMs, future lines of development focus on improving the efficiency and accessibility of language model implementation. Here are some key improvements that could significantly enhance user experience and system effectiveness:

1. Use of different LLM models:

The inclusion of a feature that allows users to compare the results generated by different models would be essential. This feature would provide users with valuable information about the relative performance of the available models, helping them select the most suitable model for their specific needs in terms of accuracy, speed, and required resources.

2. User feedback capability:

Implementing a feedback system that allows users to rate and provide feedback on the generated responses could be useful for continuously improving the model’s quality. This information could be used to adjust and refine the model over time, adapting to users’ changing preferences and needs.

3. RAG (Retrieval-augmented generation)

RAG (Retrieval-augmented generation) is an approach that combines text generation and information retrieval to enhance the responses of language models. It involves using retrieval mechanisms to obtain relevant information from a database or textual corpus, which is then integrated into the text generation process to improve the quality and coherence of the generated responses.

Links of Interest

Cloud Storage[1]: https://cloud.google.com/storage/docs

BigQuery[2]: https://cloud.google.com/bigquery/docs

Terraform[3]: https://developer.hashicorp.com/terraform/docs

PySpark[4]: https://spark.apache.org/docs/latest/api/python/index.html

Dataproc[5]: https://cloud.google.com/dataproc/docs

Codey[6]: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/code-generation

VertexAI[7]: https://cloud.google.com/vertex-ai/docs

GitHub[8]: https://github.com/alfonsozamorac/etl-genai


Container vulnerability scanning with Trivy

March 22, 2024 by Bluetab

Ángel Maroco

AWS Cloud Architect

Within the framework of container security, the build phase is of vital importance, as it is where we select the base image on which applications will run. Not having automatic mechanisms for vulnerability scanning can lead to production environments running insecure applications, with all the risks that involves.

In this article we will cover vulnerability scanning using Aqua Security’s Trivy solution, but before we begin, we need to explain what the basis is for these types of solutions for identifying vulnerabilities in Docker images.

Introduction to CVE (Common Vulnerabilities and Exposures)

CVE is a list of information maintained by the MITRE Corporation aimed at centralising the records of known security vulnerabilities. Each entry has a CVE-ID, a description of the vulnerability, the software versions affected, a possible fix for the flaw (if any) or configuration guidance to mitigate it, and references to publications or posts in forums or blogs where the vulnerability has been made public or its exploitation demonstrated.

The CVE-ID provides a standard naming convention for uniquely identifying a vulnerability. They are classified into 5 typologies, which we will look at in the Interpreting the analysis section. These types are assigned based on different metrics (if you are curious, see CVSS Calculator v3).

CVE has become the standard for vulnerability recording, so it is used by the great majority of technology companies and individuals.

There are various channels for keeping informed of all the news related to vulnerabilities: official blog, Twitter, cvelist on GitHub and LinkedIn.

If you want more detailed information about a vulnerability, you can also consult the NIST website, specifically the NVD (National Vulnerability Database).

We invite you to search for one of the following critical vulnerabilities. It is quite possible that they have affected you directly or indirectly. We should forewarn you that they have been among the most widely discussed:

  • CVE-2017-5753
  • CVE-2017-5754

If you detect a vulnerability, we encourage you to register it using the form below.

Aqua Security – Trivy

Trivy is an open source tool focused on detecting vulnerabilities in OS-level packages and dependency files for various languages:

  • OS packages: (Alpine, Red Hat Universal Base Image, Red Hat Enterprise Linux, CentOS, Oracle Linux, Debian, Ubuntu, Amazon Linux, openSUSE Leap, SUSE Enterprise Linux, Photon OS and Distroless)

  • Application dependencies: (Bundler, Composer, Pipenv, Poetry, npm, yarn and Cargo)

Aqua Security, a company specialising in development of security solutions, acquired Trivy in 2019. Together with a substantial number of collaborators, they are responsible for developing and maintaining it.

Installation

Trivy has installers for most Linux and MacOS systems. For our tests, we will use the generic installer:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/master/contrib/install.sh | sudo sh -s -- -b /usr/local/bin 

If we do not want to persist the binary on our system, we have a Docker image:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/trivycache:/root/.cache/ aquasec/trivy python:3.4-alpine 

Basic operations

  • Local images

We build a simple local image and then analyse it:

#!/bin/bash
docker build -t cloud-practice/alpine:latest -<<EOF
FROM alpine:latest
RUN echo "hello world"
EOF

trivy image cloud-practice/alpine:latest 
  • Remote images
#!/bin/bash
trivy image python:3.4-alpine 
  • Local projects:
    Enables you to analyse dependency files:
    • Pipfile.lock: Python
    • package-lock_react.json: React
    • Gemfile_rails.lock: Rails
    • Gemfile.lock: Ruby
    • Dockerfile: Docker
    • composer_laravel.lock: PHP Laravel
    • Cargo.lock: Rust
#!/bin/bash
git clone https://github.com/knqyf263/trivy-ci-test
trivy fs trivy-ci-test 
  • Public repositories:
#!/bin/bash
trivy repo https://github.com/knqyf263/trivy-ci-test 
  • Private image repositories:
    • Amazon ECR (Elastic Container Registry)
    • Docker Hub
    • GCR (Google Container Registry)
    • Private repositories with BasicAuth
  • Cache database
    The vulnerability database is hosted on GitHub. To avoid downloading this database in each analysis operation, you can use the --cache-dir <dir> parameter:
#!/bin/bash
trivy --cache-dir .cache/trivy image python:3.4-alpine3.9 
  • Filter by severity
#!/bin/bash
trivy image --severity HIGH,CRITICAL ruby:2.4.0 
  • Filter unfixed vulnerabilities
#!/bin/bash
trivy image --ignore-unfixed ruby:2.4.0 
  • Specify output code
    This option is very useful in continuous integration, as we can make the pipeline fail when critical vulnerabilities are found, while medium- and high-severity findings still allow it to finish successfully.
#!/bin/bash
trivy image --exit-code 0 --severity MEDIUM,HIGH ruby:2.4.0
trivy image --exit-code 1 --severity CRITICAL ruby:2.4.0 
  • Ignore specific vulnerabilities
    You can specify those CVEs you want to ignore by using the .trivyignore file. This can be useful if the image contains a vulnerability that does not affect your development.
#!/bin/bash
cat .trivyignore
# Accept the risk
CVE-2018-14618

# No impact in our settings
CVE-2019-1543 
  • Export output in JSON format:
    This option is useful if you want to automate a process based on the output, display the results in a custom front end, or persist the output in a structured format.
#!/bin/bash
trivy image -f json -o results.json golang:1.12-alpine
cat results.json | jq 
  • Export output in SARIF format:
    There is a standard called SARIF (Static Analysis Results Interchange Format) that defines the format for outputs that any vulnerability analysis tool should have.
#!/bin/bash
wget https://raw.githubusercontent.com/aquasecurity/trivy/master/contrib/sarif.tpl
trivy image --format template --template "@sarif.tpl" -o report-golang.sarif  golang:1.12-alpine
cat report-golang.sarif   

VS Code has the sarif-viewer extension for viewing vulnerabilities.

Continuous integration processes

Trivy has templates for the leading CI/CD solutions:

  • GitHub Actions
  • Travis CI
  • CircleCI
  • GitLab CI
  • AWS CodePipeline
#!/bin/bash
$ cat .gitlab-ci.yml
stages:
  - test

trivy:
  stage: test
  image: docker:stable-git
  before_script:
    - docker build -t trivy-ci-test:${CI_COMMIT_REF_NAME} .
    - export VERSION=$(curl --silent "https://api.github.com/repos/aquasecurity/trivy/releases/latest" | grep '"tag_name":' | sed -E 's/.*"v([^"]+)".*/\1/')
    - wget https://github.com/aquasecurity/trivy/releases/download/v${VERSION}/trivy_${VERSION}_Linux-64bit.tar.gz
    - tar zxvf trivy_${VERSION}_Linux-64bit.tar.gz
  variables:
    DOCKER_DRIVER: overlay2
  allow_failure: true
  services:
    - docker:stable-dind
  script:
    - ./trivy --exit-code 0 --severity HIGH --no-progress --auto-refresh trivy-ci-test:${CI_COMMIT_REF_NAME}
    - ./trivy --exit-code 1 --severity CRITICAL --no-progress --auto-refresh trivy-ci-test:${CI_COMMIT_REF_NAME} 

Interpreting the analysis

#!/bin/bash
trivy image httpd:2.2-alpine
2020-10-24T09:46:43.186+0200    INFO    Need to update DB
2020-10-24T09:46:43.186+0200    INFO    Downloading DB...
18.63 MiB / 18.63 MiB [---------------------------------------------------------] 100.00% 8.78 MiB p/s 3s
2020-10-24T09:47:08.571+0200    INFO    Detecting Alpine vulnerabilities...
2020-10-24T09:47:08.573+0200    WARN    This OS version is no longer supported by the distribution: alpine 3.4.6
2020-10-24T09:47:08.573+0200    WARN    The vulnerability detection may be insufficient because security updates are not provided

httpd:2.2-alpine (alpine 3.4.6)
===============================
Total: 32 (UNKNOWN: 0, LOW: 0, MEDIUM: 15, HIGH: 14, CRITICAL: 3)

+-----------------------+------------------+----------+-------------------+------------------+--------------------------------+
|        LIBRARY        | VULNERABILITY ID | SEVERITY | INSTALLED VERSION |  FIXED VERSION   |             TITLE              |
+-----------------------+------------------+----------+-------------------+------------------+--------------------------------+
| libcrypto1.0          | CVE-2018-0732    | HIGH     | 1.0.2n-r0         | 1.0.2o-r1        | openssl: Malicious server can  |
|                       |                  |          |                   |                  | send large prime to client     |
|                       |                  |          |                   |                  | during DH(E) TLS...            |
+-----------------------+------------------+----------+-------------------+------------------+--------------------------------+
| postgresql-dev        | CVE-2018-1115    | CRITICAL | 9.5.10-r0         | 9.5.13-r0        | postgresql: Too-permissive     |
|                       |                  |          |                   |                  | access control list on         |
|                       |                  |          |                   |                  | function pg_logfile_rotate()   |
+-----------------------+------------------+----------+-------------------+------------------+--------------------------------+
| libssh2-1             | CVE-2019-17498   | LOW      | 1.8.0-2.1         |                  | libssh2: integer overflow in   |
|                       |                  |          |                   |                  | SSH_MSG_DISCONNECT logic in    |
|                       |                  |          |                   |                  | packet.c                       |
+-----------------------+------------------+----------+-------------------+------------------+--------------------------------+ 
  • Library: the library/package identifying the vulnerability.

  • Vulnerability ID: vulnerability identifier (according to CVE standard).

  • Severity: there is a classification with 5 typologies, each assigned a CVSS (Common Vulnerability Scoring System) score:

    • Critical (CVSS Score 9.0-10.0): flaws that could be easily exploited by a remote unauthenticated attacker and lead to system compromise (arbitrary code execution) without requiring user interaction.

    • High (CVSS score 7.0-8.9): flaws that can easily compromise the confidentiality, integrity or availability of resources.

    • Medium (CVSS score 4.0-6.9): flaws that may be more difficult to exploit but could still lead to some compromise of the confidentiality, integrity or availability of resources under certain circumstances.

    • Low (CVSS score 0.1-3.9): all other issues that may have a security impact. These are the types of vulnerabilities that are believed to require unlikely circumstances to be able to be exploited, or which would give minimal consequences.

    • Unknown (CVSS score 0.0): allocated to vulnerabilities with no assigned score.

  • Installed version: the version installed in the system analysed.

  • Fixed version: the version in which the issue is fixed. If the version is not reported, this means the fix is pending.

  • Title: A short description of the vulnerability. For further information, see the NVD.

Now you know how to interpret the analysis information at a high level. So, what actions should you take? We give you some pointers in the Recommendations section.

Recommendations

  • This section describes some of the most important aspects within the scope of vulnerabilities in containers:

    • Avoid (wherever possible) using images in which critical and high severity vulnerabilities have been identified.
    • Include image analysis in CI processes
      Security in development is not optional; automate your testing and do not rely on manual processes.
    • Use lightweight images, fewer exposures:
      Images of the Alpine / BusyBox type are built with as few packages as possible (the base image is 5 MB), resulting in reduced attack vectors. They support multiple architectures and are updated quite frequently.
REPOSITORY  TAG     IMAGE ID      CREATED      SIZE
alpine      latest  961769676411  4 weeks ago  5.58MB
ubuntu      latest  2ca708c1c9cc  2 days ago   64.2MB
debian      latest  c2c03a296d23  9 days ago   114MB
centos      latest  67fa590cfc1c  4 weeks ago  202MB 

If, for dependency reasons, you cannot build on an Alpine base image, look for slim-type images from trusted software vendors. Apart from the security component, people who share a network with you will appreciate not having to download 1 GB images.

  • Get images from official repositories: using Docker Hub is recommended, and preferably images from official publishers.

  • Keep images up to date: the following example shows an analysis of two different Apache versions:

    Image published in 11/2018

httpd:2.2-alpine (alpine 3.4.6)
 Total: 32 (UNKNOWN: 0, LOW: 0, MEDIUM: 15, **HIGH: 14, CRITICAL: 3**) 

Image published in 01/2020

httpd:alpine (alpine 3.12.1)
 Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, **HIGH: 0, CRITICAL: 0**) 

As you can see, if a development was completed in 2018 and no maintenance was performed, you could be exposing a relatively vulnerable Apache. This is not an issue resulting from the use of containers. However, because of the versatility Docker provides for testing new product versions, we now have no excuse.

  • Pay special attention to vulnerabilities affecting the application layer:
    According to the study conducted by the company edgescan, 19% of vulnerabilities detected in 2018 were associated with Layer 7 (OSI Model), with XSS (Cross-site Scripting) type attacks standing out above all.

  • Select latest images with special care:
    Although this advice is closely related to the use of lightweight images, we consider it worth inserting a note on latest images:

Latest Apache image (Alpine base 3.12)

httpd:alpine (alpine 3.12.1)
 Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0) 

Latest Apache image (Debian base 10.6)

httpd:latest (debian 10.6)
 Total: 119 (UNKNOWN: 0, LOW: 87, MEDIUM: 10, HIGH: 22, CRITICAL: 0) 

We are using the same version of Apache (2.4.46) in both cases; the difference lies in the number of vulnerabilities detected.
Does this mean that the Debian 10 base image makes the application running on that system vulnerable? It may or may not. You need to assess whether the vulnerabilities could compromise your application. The recommendation is to use the Alpine image.

  • Evaluate the use of Docker distroless images
    The distroless concept is from Google and consists of Docker images based on Debian9/Debian10, without package managers, shells or utilities. The images are focused on programming languages (Java, Python, Golang, Node.js, dotnet and Rust), containing only what is required to run the applications. As they do not have package managers, you cannot install your own dependencies, which can be a big advantage or in other cases a big obstacle. Do testing and if it fits your project requirements, go ahead; it is always useful to have alternatives. Maintenance is Google’s responsibility, so the security aspect will be well-defined.

Container vulnerability scanner ecosystem

In our case we have used Trivy as it is a reliable, stable, open source tool that is being developed continually, but there are numerous tools for container analysis:
  • Clair
  • Snyk
  • Anchore Cloud
  • Docker Bench
  • Docker Scan
Ángel Maroco
AWS Cloud Architect

My name is Ángel Maroco and I have been working in the IT sector for over a decade. I started my career in web development and then moved on for a significant period to IT platforms in banking environments and have been working on designing solutions in AWS environments for the last 5 years.

I now combine my role as an architect with being head of /bluetab Cloud Practice, with the mission of fostering Cloud culture within the company.


Using Large Language Models on Private Information

March 11, 2024 by Bluetab

Roger Pou Lopez
Data Scientist

A RAG, an acronym for ‘Retrieval Augmented Generation,’ represents an innovative strategy within natural language processing. It combines retrieval with Large Language Models (LLMs), such as those used internally by ChatGPT (GPT-3.5-turbo or GPT-4), with the aim of enhancing response quality and reducing certain undesired behaviors, such as hallucinations.

https://www.superannotate.com/blog/rag-explained

These systems combine the concepts of vectorization and semantic search, along with LLMs, to augment their knowledge with external information that was not included during their training phase and thus remains unknown to them.

There are certain points in favor of using RAGs:

  • They reduce the level of hallucination exhibited by the models. LLMs often respond with incorrect (or invented) information even though the response makes sense semantically; this is referred to as hallucination. One of the main objectives of RAG is to minimize these situations as much as possible, especially when asking about specific topics. This is highly useful if you want to use an LLM in production.
  • Using a RAG, it is no longer necessary to retrain the LLM. This process can become economically costly, as it would require GPUs for training, in addition to the complexity that training may entail.
  • They are economical, fast (utilizing indexed information), and furthermore, they do not depend on the model being used (at any time, we can switch from GPT-3.5 to Llama-2-70B).

Drawbacks:

  • For assistance with code or mathematics it is less suitable, and more than launching a simple modified prompt will be required.
  • In the evaluation of RAGs (we will see later in the article), we will need powerful models like GPT-4.

Example Use Case

There are several examples where RAGs are being utilized. The most typical example is their use with chatbots to inquire about very specific business information.

  • In call centers, agents are starting to use a chatbot with information about rates to respond quickly and effectively to the calls they receive.
  • As sales assistants in e-commerce chatbots, where they are gaining popularity. Here, RAGs help answer product comparisons or questions about a specific service, and can recommend similar products.

Components of a RAG

Image source: https://zilliz.com/learn/Retrieval-Augmented-Generation

Let's go over the different components that make up a RAG to get a general idea of each one, and then we'll look at how these elements interact with each other.

Knowledge Base

This element is a somewhat open but intuitive concept: it refers to factual knowledge that we know the LLM is not aware of and about which it has a high risk of hallucinating. This knowledge, in text form, can come in many formats: PDF, Excel, Word, etc. Advanced RAGs are also capable of extracting knowledge from images and tables.

In general, all content will be in text format and will need to be indexed. Since human texts are often unstructured, we resort to subdividing them into smaller pieces, a strategy known as chunking.
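
As a toy illustration of the idea (the sizes are arbitrary, and real splitters are smarter about sentence and paragraph boundaries), chunking with overlap can be as simple as:

# A toy illustration of fixed-size chunking with overlap (sizes are arbitrary).
def chunk_text(text: str, chunk_size: int = 700, overlap: int = 50) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back 'overlap' characters to keep some context
    return chunks

print(len(chunk_text("a very long document " * 500)))

In practice we will rely on LangChain's RecursiveCharacterTextSplitter, as shown later in the article.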

Embedding Model

An embedding is the vector representation generated by a neural network trained on a dataset (text, images, sound, etc.) that is capable of summarizing the information of an object of that same type into a vector within a specific vector space.

For example, a text saying 'I like blue rubber ducks' and another saying 'I love yellow rubber ducks,' once converted into vectors, will be closer to each other in distance than either is to a text such as 'The cars of the future are electric cars.'

This component is what will subsequently allow us to index the different chunks of text information correctly.
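
To make the rubber duck example concrete, here is a minimal sketch (it assumes the same OpenAI text-embedding-ada-002 model used later in the article, but any embedding model would behave similarly):

import numpy as np
from langchain_openai import OpenAIEmbeddings  # requires an OpenAI API key

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

sentences = [
    "I like blue rubber ducks",
    "I love yellow rubber ducks",
    "The cars of the future are electric cars",
]
vectors = np.array(embeddings.embed_documents(sentences))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The two duck sentences should be noticeably closer to each other than to the car sentence.
print(cosine(vectors[0], vectors[1]), cosine(vectors[0], vectors[2]))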

Vector Database

This is the place where we store and index the vector information of the chunks through their embeddings. It is a very important and complex component for which, fortunately, there are already several solid open-source solutions that make it 'easy' to deploy, such as Milvus or Chroma.
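
As a small sketch of what this looks like with Chroma's native Python client (the collection name and documents here are made up; when no embedding function is passed, Chroma falls back to its default one):

import chromadb

client = chromadb.Client()  # in-memory instance; persistent clients are also available
collection = client.create_collection(name="knowledge_base")

# Index a couple of chunks; Chroma embeds them with its default embedding function.
collection.add(
    ids=["chunk-1", "chunk-2"],
    documents=[
        "I like blue rubber ducks",
        "The cars of the future are electric cars",
    ],
)

# Semantic search: return the chunk closest in meaning to the query.
results = collection.query(query_texts=["toy ducks"], n_results=1)
print(results["documents"])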

LLM

The LLM is, logically, a core piece, since the RAG exists to help it respond more accurately. We don't have to restrict ourselves to very large and capable (but not economical) models like GPT-4; smaller and 'simpler' models, in terms of response quality and number of parameters, can also be used.

Below we can see a representative image of the process of loading information into the vector database.

Image source: https://python.langchain.com/docs/use_cases/question_answering/

High-Level Operation

Now that we have a clearer understanding of the puzzle pieces, some questions arise:

  • How do these components interact with each other?
  • Why is a vector database necessary?

Let’s try to clarify the matter a bit.

Image source: https://www.hopsworks.ai/dictionary/retrieval-augmented-generation-llm

The intuitive idea of how a RAG works is as follows:

  1. The user asks a question. We transform the question into a vector using the same embedding system we used to store the chunks. This allows us to compare our question with all the information we have indexed in our vector database.
  2. We calculate the distances between the question and all the vectors we have in the database. Using a strategy, we select some of the chunks and add all this information within the prompt as context. The simplest strategy is to select a number (K) of vectors closest to the question.
  3. We pass it to the LLM to generate the response based on the contexts. That is, the prompt contains instructions + question + context returned by the Retrieval system. This is where the 'Augmentation' part of the RAG acronym comes from, as we are doing prompt augmentation.
  4. The LLM generates a response based on the question we ask and the context we have passed. This will be the response that the user will see.

This is why we need an embedding model and a vector database. That's where the trick lies. If you are able to find information in your vector database that is very similar to your question, you can detect content that may help answer it. For all of this we need an element that allows us to compare texts objectively, and we cannot keep this information stored in an unstructured way if we need to ask questions frequently.

Also, all of this ultimately ends up in the prompt, which keeps the approach independent of the LLM we are going to use.
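
Putting the four steps together, here is a minimal, framework-free sketch (it reuses the OpenAI models from the rest of the article and keeps the chunk vectors in memory; a real system would query the vector database instead):

import numpy as np
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

chunks = ["chunk 1 ...", "chunk 2 ...", "chunk 3 ..."]  # hypothetical pre-chunked knowledge
chunk_vectors = np.array(embeddings.embed_documents(chunks))

def answer(question: str, k: int = 2) -> str:
    # 1. Embed the question with the same embedding model used for the chunks.
    q = np.array(embeddings.embed_query(question))
    # 2. Rank the chunks by cosine similarity and keep the K closest as context.
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(chunks[i] for i in sims.argsort()[::-1][:k])
    # 3. Augment the prompt: instructions + question + retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # 4. The LLM generates the final response.
    return llm.invoke(prompt).content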

Evaluation of RAGs

In the same way as classical statistical or data science models, we have a need to quantify how a model is performing before using it productively.

The most basic strategy (for example, to measure the effectiveness of a linear regression) involves dividing the dataset into different parts such as train and test (80 and 20% respectively), training the model on train and evaluating on test with metrics like root-mean-square error, since the test set contains data that the model hasn’t seen. However, a RAG does not involve training but rather a system composed of different elements where one of its parts is using a text generation model.

Beyond this, here we don’t have quantitative data (i.e., numbers) and the nature of the data consists of generated text that can vary depending on the question asked, the context detected by the Retrieval system, and even the non-deterministic behavior of neural network models.

One basic strategy we can think of is to manually analyze how well our system is performing, based on asking questions and observing how the responses and contexts returned are working. But this approach becomes impractical when we want to evaluate all the possibilities of questions in very large documents and recurrently.

So, how can we do this evaluation?

The trick: Leveraging the LLMs themselves. With them, we can build a synthetic dataset that simulates the same action of asking questions to our system, just as if a human had done it. We can even add a higher level of sophistication: using a smarter model than the previous one that functions as a critic, indicating whether what is happening makes sense or not.

Example of Evaluation Dataset

Image source: https://docs.ragas.io/en/stable/getstarted/evaluation.html

What we have here are samples of Question-Answer pairs showing how our RAG system would have performed, simulating the questions a human might ask in comparison to the model we are evaluating. To do this, we need two models: the LLM we would use in our RAG, for example, GPT-3.5-turbo (Answer), and another model with better performance to generate a ‘truth’ (Ground Truth), such as GPT-4.

In other words, GPT-3.5 acts as the answer generation system, while GPT-4 serves as the critic.

Once we have generated our evaluation dataset, the next step is to quantify it numerically using some form of metric.

Evaluation Metrics

The evaluation of generated responses is relatively new, but there are already open-source projects that quantify the quality of RAGs effectively. These evaluation systems allow the 'Retrieval' and 'Generation' parts to be measured separately.

Image source: https://docs.ragas.io/en/stable/concepts/metrics/index.html

Faithfulness Score

It measures how faithful our responses are to a given context; that is, what proportion of the answer is supported by the context obtained through our system. This metric helps control the hallucinations that LLMs may produce: a very low value implies that the model is making things up, even when given a context. Therefore, this metric should be as close to one as possible.
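
Roughly following how the RAGAS documentation describes it, the score can be thought of as the fraction of claims in the generated answer that are supported by the retrieved context:

\text{faithfulness} = \frac{|\text{claims in the answer supported by the context}|}{|\text{claims in the answer}|}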

Answer Relevancy Score

It quantifies the relevance of the response based on the question asked to our system. If the response is not relevant to what we asked, it is not answering us properly. Therefore, the higher this metric is, the better.

Context Precision Score

It evaluates whether the ground-truth items that appear in the retrieved contexts are ranked at the top, i.e., whether the most relevant chunks come first.

Context Recall Score

It quantifies whether the returned context aligns with the annotated answer; in other words, how relevant the context is to the question we ask. A low value indicates that the returned context is not very relevant and does not help us answer the question.

How all these metrics are being evaluated is a bit more complex, but we can find well-explained examples in the RAGAS documentation.

Practical Example using LangChain, OpenAI, and ChromaDB

We are going to use the LangChain framework, which allows us to build a RAG very easily.

The dataset we will use is an essay by Paul Graham, a typical example dataset that is small in size.

The vector database we will use is Chroma, open-source and fully integrated with LangChain. Its use will be completely transparent, using the default parameters.

NOTE: Each call to an associated model incurs a monetary cost, so it’s advisable to review the pricing of OpenAI. We will be working with a small dataset of 10 questions, but if scaled, the cost could increase.

import os
from dotenv import load_dotenv  

load_dotenv() # Configure the OpenAI API key

from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.prompts import ChatPromptTemplate

embeddings = OpenAIEmbeddings(
    model="text-embedding-ada-002"
)

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 700,
    chunk_overlap = 50
)

loader = TextLoader('paul_graham/paul_graham_essay.txt')
text = loader.load()
documents = text_splitter.split_documents(text)
print(f'Number of chunks generated from the document: {len(documents)}')

vector_store = Chroma.from_documents(documents, embeddings)
retriever = vector_store.as_retriever()
Number of chunks generated from the document: 158

Since the text of the essay is in English, our prompt template must also be in English.

from langchain.prompts import ChatPromptTemplate

template = """Answer the question based only on the following context. If you cannot answer the question with the context, please respond with 'I don't know':

Context:
{context}

Question:
{question}
"""

prompt = ChatPromptTemplate.from_template(template)

Now we are going to define our RAG using LCEL (LangChain Expression Language). The model we will use to answer our RAG's questions will be GPT-3.5-turbo. It's important to set the temperature parameter to 0 so that the model does not get creative.

from operator import itemgetter

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough 

primary_qa_llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

retrieval_augmented_qa_chain = (
    # 1. Retrieve the relevant chunks for the question and pass the question through.
    {"context": itemgetter("question") | retriever, "question": itemgetter("question")}
    # 2. Keep the retrieved context available for the next step (and for inspection).
    | RunnablePassthrough.assign(context=itemgetter("context"))
    # 3. Build the prompt with question + context and let the LLM generate the response.
    | {"response": prompt | primary_qa_llm, "context": itemgetter("context")}
)

Now it is possible to start asking questions to our RAG system.

question = "What was doing the author before collegue? "

result = retrieval_augmented_qa_chain.invoke({"question" : question}) 

print(f'Answer: {result["response"].content}')
Answer: The author was working on writing and programming before college.

We can also inspect which contexts have been returned by our retriever. As mentioned, the retrieval strategy is the default one and returns the 4 closest contexts for a question.

display(retriever.get_relevant_documents(question))
[Document(page_content="What I Worked On\n\nFebruary 2021\n\nBefore college the two main things I worked on, outside of school, were writing and programming. I didn't write essays. I wrote what beginning writers were supposed to write then, and probably still are: short stories. My stories were awful. They had hardly any plot, just characters with strong feelings, which I imagined made them deep.", metadata={'source': 'paul_graham/paul_graham_essay.txt'}),
 Document(page_content="Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.", metadata={'source': 'paul_graham/paul_graham_essay.txt'}),
 Document(page_content="In the print era, the channel for publishing essays had been vanishingly small. Except for a few officially anointed thinkers who went to the right parties in New York, the only people allowed to publish essays were specialists writing about their specialties. There were so many essays that had never been written, because there had been no way to publish them. Now they could be, and I was going to write them. [12]\n\nI've worked on several different things, but to the extent there was a turning point where I figured out what to work on, it was when I started publishing essays online. From then on I knew that whatever else I did, I'd always write essays too.", metadata={'source': 'paul_graham/paul_graham_essay.txt'}),
 Document(page_content="Wow, I thought, there's an audience. If I write something and put it on the web, anyone can read it. That may seem obvious now, but it was surprising then. In the print era there was a narrow channel to readers, guarded by fierce monsters known as editors. The only way to get an audience for anything you wrote was to get it published as a book, or in a newspaper or magazine. Now anyone could publish anything.", metadata={'source': 'paul_graham/paul_graham_essay.txt'})]
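
If we wanted a different number of chunks, LangChain lets us override this default when creating the retriever; for example (the value 2 is arbitrary):

# Return only the 2 closest chunks instead of the default 4.
retriever = vector_store.as_retriever(search_kwargs={"k": 2})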

Evaluating our RAG

Now that we have our RAG set up thanks to LangChain, we still need to evaluate it.

Both LangChain and LlamaIndex are starting to offer easy ways to evaluate RAGs without leaving the framework. However, for now the best option is to use RAGAS, the library mentioned earlier, which is specifically designed for this purpose. Internally, it will use GPT-4 as the critic model.

from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
text = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 1000,
    chunk_overlap = 200
)
documents = text_splitter.split_documents(text)

generator = TestsetGenerator.with_openai()
testset = generator.generate_with_langchain_docs(
    documents, 
    test_size=10, 
    distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25}
)
test_df = testset.to_pandas()
display(test_df)
question contexts ground_truth evolution_type episode_done
0 What is the batch model and how does it relate… [The most distinctive thing about YC is the ba… The batch model is a method used by YC (Y Comb… simple True
1 How did the use of Scheme in the new version o… [In the summer of 2006, Robert and I started w… The use of Scheme in the new version of Arc co… simple True
2 How did learning Lisp expand the author’s conc… [There weren’t any classes in AI at Cornell th… Learning Lisp expanded the author’s concept of… simple True
3 How did Moore’s Law contribute to the downfall… [[4] You can of course paint people like still… Moore’s Law contributed to the downfall of com… simple True
4 Why did the creators of Viaweb choose to make … [There were a lot of startups making ecommerce… The creators of Viaweb chose to make their eco… simple True
5 During the author’s first year of grad school … [I applied to 3 grad schools: MIT and Yale, wh… reasoning True
6 What suggestion from a grad student led to the… [McCarthy didn’t realize this Lisp could even … reasoning True
7 What makes paintings more realistic than photos? [life interesting is that it’s been through a … By subtly emphasizing visual cues, paintings c… multi_context True
8 “What led Jessica to compile a book of intervi… [Jessica was in charge of marketing at a Bosto… Jessica’s realization of the differences betwe… multi_context True
9 Why did the founders of Viaweb set their price… [There were a lot of startups making ecommerce… The founders of Viaweb set their prices low fo… simple True
test_questions = test_df["question"].values.tolist()
test_groundtruths = test_df["ground_truth"].values.tolist()
answers = []
contexts = []
for question in test_questions:
  response = retrieval_augmented_qa_chain.invoke({"question" : question})
  answers.append(response["response"].content)
  contexts.append([context.page_content for context in response["context"]])

from datasets import Dataset # HuggingFace
response_dataset = Dataset.from_dict({
    "question" : test_questions,
    "answer" : answers,
    "contexts" : contexts,
    "ground_truth" : test_groundtruths
})
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
)

metrics = [
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
]

results = evaluate(response_dataset, metrics)
results_df = results.to_pandas().dropna()
question answer contexts ground_truth faithfulness answer_relevancy context_recall context_precision
0 What is the batch model and how does it relate… The batch model is a system where YC funds a g… [The most distinctive thing about YC is the ba… The batch model is a method used by YC (Y Comb… 0.750000 0.913156 1.0 1.000000
1 How did the use of Scheme in the new version o… The use of Scheme in the new version of Arc co… [In the summer of 2006, Robert and I started w… The use of Scheme in the new version of Arc co… 1.000000 0.910643 1.0 1.000000
2 How did learning Lisp expand the author’s conc… Learning Lisp expanded the author’s concept of… [So I looked around to see what I could salvag… Learning Lisp expanded the author’s concept of… 1.000000 0.924637 1.0 1.000000
3 How did Moore’s Law contribute to the downfall… Moore’s Law contributed to the downfall of com… [[5] Interleaf was one of many companies that … Moore’s Law contributed to the downfall of com… 1.000000 0.940682 1.0 1.000000
4 Why did the creators of Viaweb choose to make … The creators of Viaweb chose to make their eco… [There were a lot of startups making ecommerce… The creators of Viaweb chose to make their eco… 0.666667 0.960447 1.0 0.833333
5 What suggestion from a grad student led to the… The suggestion from grad student Steve Russell… [McCarthy didn’t realize this Lisp could even … The suggestion from a grad student, Steve Russ… 1.000000 0.931730 1.0 0.916667
6 What makes paintings more realistic than photos? By subtly emphasizing visual cues such as the … [copy pixel by pixel from what you’re seeing. … By subtly emphasizing visual cues, paintings c… 1.000000 0.963414 1.0 1.000000
7 “What led Jessica to compile a book of intervi… Jessica was surprised by how different reality… [Jessica was in charge of marketing at a Bosto… Jessica’s realization of the differences betwe… 1.000000 0.954422 1.0 1.000000
8 Why did the founders of Viaweb set their price… The founders of Viaweb set their prices low fo… [There were a lot of startups making ecommerce… The founders of Viaweb set their prices low fo… 1.000000 1.000000 1.0 1.000000

We can visualize the distributions of the resulting metrics.

results_df.plot.hist(subplots=True,bins=20)

We can observe that the system is not perfect, even though we have generated only 10 questions (more would be needed), and we can also see that for one of them the RAG pipeline failed to create a ground truth.

Nevertheless, we could draw some conclusions:

  • Sometimes it is not able to provide very faithful responses.
  • The relevance of the responses varies but is consistently good.
  • The context recall is perfect but the context precision is not as good.

Now, here we can consider trying different elements:

  • Changing the embedding used to one that we can find in the HuggingFace MTEB Leaderboard.
  • Improving the retrieval system with different strategies than the default.
  • Evaluating with other LLMs.

With these possibilities, it is feasible to try each of the previous strategies and choose the one that best fits our data or our budget; a brief sketch of the first two options is shown below.
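
As a sketch of those first two options (the embedding model name is just one example from the MTEB Leaderboard, MMR is one of several retrieval strategies exposed by LangChain, and the documents and Chroma setup are reused from the code above):

from langchain_community.embeddings import HuggingFaceEmbeddings

# Swap the OpenAI embedding for an open-source model from the MTEB Leaderboard.
hf_embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")
vector_store = Chroma.from_documents(documents, hf_embeddings)

# Use Maximal Marginal Relevance instead of plain top-K similarity search.
retriever = vector_store.as_retriever(search_type="mmr", search_kwargs={"k": 4})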

Conclusions

In this article, we have seen what a RAG consists of and how we can evaluate a complete workflow. This topic is currently booming, as it is one of the most effective and most economical alternatives to fine-tuning LLMs.

It is likely that new metrics and new frameworks will make this evaluation simpler and more effective; in upcoming articles we will look not only at how they evolve but also at how to bring a RAG-based architecture into production.


Filed Under: Blog, Practices, Tech

