<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">"""
Inria Raweb dataset with Pipeline API
============================================

In this example we will process the Inria Raweb dataset using the `Pipeline` API.

The pipeline comprises the following steps:

- extract entities 
- use Latent Semantic Analysis (LSA) to generate n-dimensional vector 
  representation of the entities
- use Uniform Manifold Approximation and Projection (UMAP) to project those 
  entities in 2 dimensions
- use KMeans clustering to cluster entities
- find their nearest neighbors.

All files necessary to run the Inria Raweb pipeline can be downloaded from https://zenodo.org/record/7970984.
"""

###############################################################################
# Create Inria Raweb Dataset
# ==========================
#
# We will first create the `Dataset` for Inria Raweb.
#
# The CSV file `raweb.csv` contains the dataset data.

from cartodata.pipeline.datasets import CSVDataset  # noqa
from pathlib import Path  # noqa

ROOT_DIR = Path.cwd().parent
# The directory where files necessary to load dataset columns reside
INPUT_DIR = ROOT_DIR / "datas"
# The directory where the generated dump files will be saved
TOP_DIR = ROOT_DIR / "dumps"

dataset = CSVDataset("inriaraweb", input_dir=INPUT_DIR, version="1.0.0",
                     filename="raweb.csv",
                     fileurl="https://zenodo.org/record/7970984/files/raweb.csv")

dataset.df.head()

###############################################################################
# The dataframe that we just read consists of 118455 rows.

dataset.df.shape[0]

###############################################################################
# And it has name, team, teamyear, year, center, theme, text and keywords as columns.

print(*dataset.df.columns, sep="\n")

###############################################################################
# Now we should define our entities and set the column names corresponding to those entities in the data file. We have 7 entities:
#
# | entity | column name in the file |
# |-----------|-------------|
# | rawebpart | name |
# | teams | team |
# | cwords | text |
# | teamyear | teamyear |
# | center | center |
# | theme | theme |
# | words | text |
#
#
# Cartolabe provides 4 types of columns:
#
#
# - **IdentityColumn**: The entity of this column represents the main entity of the dataset. The column data corresponding to the entity in the file should contain a single value and this value should be unique among column values. There can only be one `IdentityColumn` in the dataset.
# - **CSColumn**: The entity of this column type is related to the main entity, and can contain single or comma separated values.
# - **CorpusColumn**: The entity of this column type is the corpus related to the main entity. This can be a combination of multiple columns in the file. It uses a modified version of `CountVectorizer <https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html>`_.
# - **TfidfCorpusColumn**: The entity of this column type is the corpus related to the main entity. This can be a combination of multiple columns in the file, or it can contain a file path from which to read the text corpus. It uses `TfidfVectorizer <https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html>`_.
#
# To define the columns we will need two additional files, `stopwords_raweb.txt` and `inriavocab.csv`. We can download them from Zenodo and save them under the `../datas` directory.
#

from download import download  # noqa

stopwords_url = "https://zenodo.org/record/7970984/files/stopwords_raweb.txt"
vocab_url = "https://zenodo.org/record/7970984/files/inriavocab.csv"

download(stopwords_url, INPUT_DIR / "stopwords_raweb.txt", kind='file',
         progressbar=True, replace=False)

download(vocab_url, INPUT_DIR / "inriavocab.csv", kind='file',
         progressbar=True, replace=False)

###############################################################################
# In this dataset, **rawebpart** is our main entity. We will define it as an `IdentityColumn`:

from cartodata.pipeline.columns import IdentityColumn, CSColumn, CorpusColumn  # noqa


rawebpart_column = IdentityColumn(nature="rawebpart", column_name="name")

teams_column = CSColumn(nature="teams", column_name="team",
                        filter_min_score=4)

cwords_column = CorpusColumn(nature="cwords", column_names=["text"],
                             stopwords="stopwords_raweb.txt", nb_grams=4,
                             min_df=25, max_df=0.05, normalize=True,
                             vocabulary="inriavocab.csv")

teamyear_column = CSColumn(nature="teamyear", column_name="teamyear",
                           filter_min_score=4)

center_column = CSColumn(nature="center", column_name="center", separator=";")

theme_column = CSColumn(nature="theme", column_name="theme", separator=";",
                        filter_nan=True)

words_column = CorpusColumn(nature="words", column_names=["text"],
                            stopwords="stopwords_raweb.txt", nb_grams=4,
                            min_df=25, max_df=0.1, normalize=True)

###############################################################################
# Now we are going to set the columns of the dataset:

dataset.set_columns([rawebpart_column, teams_column, cwords_column, teamyear_column,
                     center_column, theme_column, words_column])

###############################################################################
# We can set the columns in any order that we prefer. Here we set the identity entity first and the corpus entities last. If we set the entities in a different order, the `Dataset` will reorder them so that the main entity comes first.
#
# The dataset for Inria Raweb data is ready. Now we will create and run our pipeline. For this pipeline, we will:
#
# - run LSA projection -> N-dimensional
# - run UMAP projection -> 2D
# - cluster entities
# - find nearest neighbors

###############################################################################
# Create and run pipeline
# =========================
#
# We will first create a pipeline with the dataset.

from cartodata.pipeline.common import Pipeline  # noqa

pipeline = Pipeline(dataset=dataset, top_dir=TOP_DIR, input_dir=INPUT_DIR)

###############################################################################
# The pipeline generates the `natures` from the dataset columns.

pipeline.natures

###############################################################################
# Creating correspondence matrices for each entity type
# -------------------------------------------------------------------------------
#
# Now we want to extract matrices that will map the correspondence between each name in the dataset and the entities we want to use.
#
# The pipeline provides the `generate_entity_matrices` function to generate matrices and scores for each entity (nature) specified for the dataset.

matrices, scores = pipeline.generate_entity_matrices(force=True)

###############################################################################
# **Rawebpart**
#
# The first matrix in `matrices` and the first Series in `scores` correspond to **rawebpart**.
#
# The type of the rawebpart column is `IdentityColumn`. It generates a matrix that simply maps each row entry to itself.

rawebpart_mat = matrices[0]
rawebpart_mat.shape

###############################################################################
# As the column type is `IdentityColumn`, each item has a score of 1.

rawebpart_scores = scores[0]
rawebpart_scores.shape

rawebpart_scores.head()
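
###############################################################################
# As a quick sanity check, we can verify that every score is indeed 1
# (assuming, as above, that `rawebpart_scores` is a pandas Series):

# all identity entities should have a score of exactly 1
assert (rawebpart_scores == 1).all()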

###############################################################################
# **Teams**
#
# The second matrix in `matrices` and Series in `scores` correspond to **teams**.
#
# The type for teams is `CSColumn`. It generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to a team, obtained by splitting the comma-separated values.

teams_mat = matrices[1]
teams_mat.shape

teams_scores = scores[1]
teams_scores.head()

teams_scores.shape
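
###############################################################################
# To get a feel for the most represented teams, we can sort the scores in
# descending order; this is plain pandas on the scores Series:

teams_scores.sort_values(ascending=False).head(10)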

###############################################################################
# **Cwords**
#
# The third matrix in `matrices` and Series in `scores` correspond to **cwords**.
#
# The type for the cwords column is `CorpusColumn`. It uses the text column in the dataset and extracts n-grams from that corpus using the fixed vocabulary `../datas/inriavocab.csv`. Finally it generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to an n-gram.

cwords_mat = matrices[2]
cwords_mat.shape

cwords_scores = scores[2]
cwords_scores.head()

###############################################################################
# **Teamyear**
#
# The fourth matrix in `matrices` and Series in `scores` correspond to **teamyear**.
#
# The type for teamyear is `CSColumn`. It generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to a team-year value, obtained by splitting the comma-separated values.

teamyear_mat = matrices[3]
teamyear_mat.shape

teamyear_scores = scores[3]
teamyear_scores.head()

###############################################################################
# **Center**
#
# The fifth matrix in `matrices` and Series in `scores` correspond to **center**.
#
# The type for center is `CSColumn`. It generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to a center, obtained by splitting the semicolon-separated values.

center_mat = matrices[4]
center_mat.shape

center_scores = scores[4]
center_scores.head()

###############################################################################
# **Theme**
#
# The sixth matrix in `matrices` and Series in `scores` correspond to **theme**.
#
# The type for theme is `CSColumn`. It generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to a theme, obtained by splitting the semicolon-separated values.

theme_mat = matrices[5]
theme_mat.shape

theme_scores = scores[5]
theme_scores.head()

###############################################################################
# **Words**
#
# The seventh matrix in `matrices` and Series in `scores` correspond to **words**.
#
# The type for the words column is `CorpusColumn`. It builds a corpus from the text column and extracts n-grams from it. Finally it generates a sparse matrix where each row corresponds to a dataset entry and each column corresponds to an n-gram.

words_mat = matrices[6]
words_mat.shape

###############################################################################
# Here we see that there are 56532 distinct n-grams.
#
# The series, which we named `words_scores`, contains the list of n-grams
# with a score that is equal to the number of rows that this value
# was mapped within the `words_mat` matrix.

words_scores = scores[6]
words_scores.head(10)
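
###############################################################################
# We can also list the highest scored n-grams, i.e. the n-grams mapped to the
# largest number of rows:

words_scores.sort_values(ascending=False).head(10)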

###############################################################################
# Dimension reduction
# ------------------------------
#
# One way to see the matrices that we created is as coordinates in the space of
# all dataset entries. What we want to do is to reduce the dimension of this
# space to make it easier to work with and see.
#
# **LSA projection**
#
# We'll start by using the LSA (Latent Semantic Analysis) technique to reduce the number of rows in our data.

from cartodata.pipeline.projectionnd import LSAProjection  # noqa

num_dim = 100

lsa_projection = LSAProjection(num_dim)
pipeline.set_projection_nd(lsa_projection)
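
###############################################################################
# For illustration, LSA is commonly implemented as a truncated SVD of the
# corpus matrix. The following is a minimal standalone sketch using
# scikit-learn's `TruncatedSVD` on random sparse data; it is not the
# pipeline's actual implementation, which handles the entity matrices itself.

from sklearn.decomposition import TruncatedSVD  # noqa
import scipy.sparse as sp  # noqa

# a random sparse "corpus" matrix: 500 documents x 2000 terms
X_example = sp.random(500, 2000, density=0.01, format="csr", random_state=42)
svd = TruncatedSVD(n_components=num_dim, random_state=42)
X_lsa = svd.fit_transform(X_example)
X_lsa.shape  # -> (500, 100)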

###############################################################################
# Now we can run LSA projection on the matrices.
#
# In our matrices we have 2 columns generated from a given corpus: cwords and words. When we create the dataset and set its columns, the dataset sets the index of the corpus column as `corpus_index`. When there is more than one column of type `cartodata.pipeline.columns.CorpusColumn`, the index of the final one is set as `corpus_index`; in our case 6.
#
# We would like to use the cwords column as the corpus column for the LSA projection, so before running the projection we should set the `corpus_index`:

pipeline.dataset.corpus_index = 2
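
###############################################################################
# We can check which nature the corpus index now points to (assuming the
# order of `pipeline.natures` follows the order of the dataset columns):

list(pipeline.natures)[pipeline.dataset.corpus_index]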

""
matrices_nD = pipeline.do_projection_nD(matrices, force=True)

for nature, matrix in zip(pipeline.natures, matrices_nD):
    print(f"{nature}  -------------   {matrix.shape}")

###############################################################################
# We have 100 rows for each entity.
#
# This makes it easier to work with them for clustering or nearest neighbors
# tasks, but we also want to project them on a 2D space to be able to map them.
#
# **UMAP projection**
#
# The `UMAP <https://github.com/lmcinnes/umap>`_ (Uniform Manifold Approximation
# and Projection) is a dimension reduction technique that can be used for
# visualisation similarly to t-SNE.
#
# We use this algorithm to project our matrices in 2 dimensions.

from cartodata.pipeline.projection2d import UMAPProjection  # noqa


umap_projection = UMAPProjection(n_neighbors=10, min_dist=0.1)

pipeline.set_projection_2d(umap_projection)
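
###############################################################################
# For illustration, the following is a minimal standalone sketch of UMAP using
# the umap-learn package on random data; the pipeline applies the equivalent
# transformation to the LSA matrices for us.

import numpy as np  # noqa
import umap  # noqa

rng = np.random.default_rng(42)
X_100d = rng.normal(size=(500, 100))  # 500 points in 100 dimensions
reducer = umap.UMAP(n_neighbors=10, min_dist=0.1, random_state=42)
X_2d = reducer.fit_transform(X_100d)
X_2d.shape  # -> (500, 2)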

###############################################################################
# Now we can run UMAP projection on the LSA matrices.

matrices_2D = pipeline.do_projection_2D(force=True)

###############################################################################
# Now that we have 2D coordinates for our points, we can try to plot them to
# get a feel of the data's shape.

labels = tuple(pipeline.natures)
colors = ['darkgreen', 'red', 'cyan', 'navy',
          'peru', 'gold', 'pink', 'cornflowerblue']

fig, ax = pipeline.plot_map(matrices_2D, labels, colors)

###############################################################################
# As we don't have labels for the points, the plot above doesn't tell us much
# by itself. But we can see that the data forms some clusters, which we can
# now try to identify.
#
# Clustering
# ---------------
#
# In order to identify clusters, we use the KMeans clustering technique on the
# articles. We'll also try to label these clusters by selecting the most
# frequent words that appear in each cluster's articles.

from cartodata.pipeline.clustering import KMeansClustering  # noqa

###############################################################################
# levels of clusters, hl: high level, ml: medium level, ll: low level,
# vll: very low level
cluster_natures = ["hl_clusters", "ml_clusters", "ll_clusters", "vll_clusters"]

kmeans_clustering = KMeansClustering(
    n=8, base_factor=3, natures=cluster_natures)

pipeline.set_clustering(kmeans_clustering)
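
###############################################################################
# For illustration, KMeans itself boils down to the following scikit-learn
# call on a set of 2D points (a sketch on random data; the pipeline's
# `KMeansClustering` additionally labels the clusters and scores them at
# each level):

from sklearn.cluster import KMeans  # noqa
import numpy as np  # noqa

rng = np.random.default_rng(0)
points_2d = rng.normal(size=(500, 2))
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(points_2d)
kmeans.labels_[:10]  # cluster assignment of the first 10 points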

###############################################################################
# Now we can run clustering on the matrices.

(clus_nD, clus_2D, clus_scores, cluster_labels,
 cluster_eval_pos, cluster_eval_neg) = pipeline.do_clustering()

###############################################################################
# As we have specified four levels of clustering, the returned lists will have four values.

len(clus_2D)

###############################################################################
# We will now display two of these levels in separate plots, starting with the high level clusters:

clus_scores_hl = clus_scores[0]
clus_mat_hl = clus_2D[0]


fig_hl, ax_hl = pipeline.plot_map(matrices_2D, labels, colors,
                                  title="Inria Raweb Dataset High Level Clusters",
                                  annotations=clus_scores_hl.index,
                                  annotation_mat=clus_mat_hl)

###############################################################################
# The 8 high level clusters that we created give us a general idea of what the big
# clusters of data contain.
#
# With medium level clusters we have a finer level of detail:

clus_scores_ml = clus_scores[1]
clus_mat_ml = clus_2D[1]

fig_ml, ax_ml = pipeline.plot_map(matrices_2D, labels, colors,
                                  title="Inria Raweb Dataset Medium Level Clusters",
                                  annotations=clus_scores_ml.index,
                                  annotation_mat=clus_mat_ml)
""
pipeline.save_plot(fig_hl, "inriaraweb_hl_clusters.png")
pipeline.save_plot(fig_ml, "inriaraweb_ml_clusters.png")


for file in pipeline.top_dir.glob("*.png"):
    print(file)

###############################################################################
# Nearest neighbors
# ----------------------------
#
# One more thing which could be useful to appreciate the quality of our data
# would be to get each point's nearest neighbors.
#
# Finding nearest neighbors is a common task with various algorithms aiming to
# solve it. The `find_neighbors` method uses one of these algorithms to find the
# nearest points of all entities. It takes an optional weight parameter to tweak
# the distance calculation to select points that have a higher score but are
# maybe a bit farther instead of just selecting the closest neighbors.

from cartodata.pipeline.neighbors import AllNeighbors  # noqa

n_neighbors = 10
weights = [0, 0, 0, 0, 0, 0, 0.3]

neighboring = AllNeighbors(n_neighbors=n_neighbors, power_scores=weights)

pipeline.set_neighboring(neighboring)

pipeline.find_neighbors()
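
###############################################################################
# For illustration, the core of a nearest-neighbor search looks like the
# following scikit-learn sketch on random data (without the optional score
# weighting that `find_neighbors` applies):

from sklearn.neighbors import NearestNeighbors  # noqa
import numpy as np  # noqa

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 100))
nn = NearestNeighbors(n_neighbors=10).fit(points)
distances, indices = nn.kneighbors(points)
indices[0]  # the 10 nearest points to the first point (itself included)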


###############################################################################
# Export file using exporter
# ===========================
#
# We can now export the data. To do so, we first need to configure the exporter.
#
# The exported data will be the points extracted from the dataset corresponding to the entities that we have defined.
#
# In the export file, we will have the following columns for each point:
#
#
# | column | value |
# |--------|-------|
# | nature | one of rawebpart, teams, cwords, teamyear, center, theme, words |
# | label | point's label |
# | score | point's score |
# | rank | point's rank |
# | x | point's x location on the map |
# | y | point's y location on the map |
# | nn_rawebpart | nearest rawebpart entries to this point |
# | nn_teams | nearest teams to this point |
# | nn_cwords | nearest cwords to this point |
# | nn_teamyear | nearest teamyear entries to this point |
# | nn_center | nearest centers to this point |
# | nn_theme | nearest themes to this point |
# | nn_words | nearest words to this point |
#
# We will call the `pipeline.export` function. It will create an `export.feather` file and save it under `pipeline.working_dir`.

pipeline.export()

###############################################################################
# Let's display the contents of the file.

import pandas as pd  # noqa

df = pd.read_feather(pipeline.working_dir / "export.feather")
df.head()
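
###############################################################################
# We can check which natures are present in the export:

df["nature"].unique()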

###############################################################################
# This is a basic export file. For each point, we can add additional columns.

from cartodata.pipeline.exporting import (
    ExportNature, MetadataColumn
)  # noqa


meta_year_article = MetadataColumn(column="year", as_column="year",
                                   func="x.astype(str)")

ex_rawebpart = ExportNature(key="rawebpart",
                            refs=["center", "teams",
                                  "cwords", "theme", "words"],
                            add_metadata=[meta_year_article])

ex_teams = ExportNature(key="teams",
                        refs=["center", "cwords", "theme", "words"])

ex_teamyear = ExportNature(key="teamyear",
                           refs=["center", "teams", "cwords", "theme", "words"])

""
pipeline.export(export_natures=[ex_rawebpart, ex_teams, ex_teamyear])

""
df = pd.read_feather(pipeline.working_dir / "export.feather")
df.head(5)

""
df[df.nature == "rawebpart"].head(1)

""
df[df.nature == "teams"].head(1)

""
df[df.nature == "teamyear"].head(5)