Preprocessing Steps¶
In this notebook, I outline a few key preprocessing steps we could use to get a better representation of our data.
import sys
import numpy as np
import pandas as pd
sys.path.insert(0, '/home/emmanuel/projects/2020_ml_ocn/ml4ocean/src')
from data.make_dataset import DataLoad
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
%load_ext autoreload
%autoreload 2
North Atlantic¶
Outputs¶
dataloader = DataLoad()
# North Atlantic control data
X, Y = dataloader.load_control_data('na')
# inputs: core + location variables; outputs: everything except the metadata columns
X = X[dataloader.core_vars + dataloader.loc_vars]
Y = Y.drop(dataloader.meta_vars, axis=1)
import matplotlib.colors as colors

def plot_bbp_profile(dataframe: pd.DataFrame):
    # log-scaled colormap; the small offset guards against exact zeros
    norm = colors.LogNorm(vmin=dataframe.values.min() + 1e-10, vmax=dataframe.values.max())
    fig, ax = plt.subplots(figsize=(50, 50))
    # transpose so depth runs down the y-axis and profiles run along the x-axis
    ax.imshow(dataframe.T, cmap='viridis', norm=norm)
    plt.show()
plot_bbp_profile(Y)
Log Transformation¶
This is a very simple but very common transformation in the sciences. I think it will highlight some aspects near the surface because it stretches the small values at the lower end of the distribution, which is where most of our outputs sit. It might not be necessary for all of our inputs, but it will probably be necessary for the outputs. I have heard that transforming the Mixed Layer Depth (MLD) might improve the representation as well.
np.log(Y).describe()
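One caveat: np.log sends exact zeros to -inf. If Y contains zeros, a small offset (the same trick used in the LogNorm above) keeps the transform finite. A minimal sketch:
# guard against exact zeros before taking the log
Y_log = np.log(Y + 1e-10)
Y_log.describe()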
Standard Preprocessing¶
Remove the Mean¶
This is a super common transformation and it's typically the transformation we start with. It just involves removing the mean, $\mu$, from each variable: $x' = x - \mu$ (by default StandardScaler also divides by the standard deviation, $\sigma$, giving $x' = (x - \mu)/\sigma$).
We can use the sklearn class sklearn.preprocessing.StandardScaler() to perform the normalization.
Note: I don't transform the lat, lon, doy coordinates. I think there are smarter transformations for those variables, outlined below.
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.compose import ColumnTransformer, make_column_transformer
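For the inputs, here is a sketch of what the note above describes, using the make_column_transformer we just imported: standardize the core variables and pass lat, lon, doy through untouched. This assumes dataloader.core_vars and dataloader.loc_vars hold those column names.
# standardize the core input variables; leave the spatio-temporal coordinates untouched
x_transformer = make_column_transformer(
    (StandardScaler(), dataloader.core_vars),
    ('passthrough', dataloader.loc_vars),
)
X_ = x_transformer.fit_transform(X)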
# scale the outputs to [0, 1] (note: MinMaxScaler here rather than StandardScaler)
y_normalizer = MinMaxScaler()
Y_ = y_normalizer.fit_transform(Y)
# put the scaled values back into a DataFrame with the original index/columns
Y_norm = Y.copy()
Y_norm[:] = Y_
Y_norm.describe()
plot_bbp_profile(Y_norm)
Clustering¶
from sklearn.cluster import KMeans
from sklearn.preprocessing import KBinsDiscretizer
from features.build_features import get_geodataframe
from visualization.visualize import plot_geolocations
# k-means with 4 clusters on the output profiles (verbose expects an int, not None)
clf = KMeans(init='k-means++', n_clusters=4, n_init=10, max_iter=1_000, verbose=0)
clf.fit(Y)
clusters = clf.predict(Y)
clusters.shape
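A quick check on how the profiles are split across the four clusters:
# number of profiles assigned to each cluster label
np.bincount(clusters)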
clusters_geo = pd.DataFrame(clusters, columns=['clusters'])
clusters_geo['lat'] = X['lat']
clusters_geo['lon'] = X['lon']
clusters_geo = get_geodataframe(clusters_geo)
# plot each cluster's locations in its own color (n_clusters=4, so the labels are 0-3)
plot_geolocations(clusters_geo[clusters_geo['clusters']==0], color='red')
plot_geolocations(clusters_geo[clusters_geo['clusters']==1], color='blue')
plot_geolocations(clusters_geo[clusters_geo['clusters']==2], color='orange')
plot_geolocations(clusters_geo[clusters_geo['clusters']==3], color='green')
Outputs¶
We can use sklearn to create transformed versions of the outputs as well: a log transform, standard scaling (mean only), and binning with sklearn.preprocessing.KBinsDiscretizer.
Log Transform¶
Standard Scaling (Mean only...)¶
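The "(Mean only...)" heading suggests centering without rescaling, which StandardScaler supports via with_std=False. A minimal sketch:
# center each output column without rescaling the variance
y_centerer = StandardScaler(with_std=False)
Y_centered = Y.copy()
Y_centered[:] = y_centerer.fit_transform(Y)
Y_centered.describe()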
KBinsDiscretization¶
from sklearn.preprocessing import KBinsDiscretizer

# bin each output column into 10 quantile bins, encoded as ordinal integers
n_bins = 10
strategy = 'quantile'
encode = 'ordinal'
transformer = KBinsDiscretizer(n_bins=n_bins, strategy=strategy, encode=encode)

# put the binned values back into a DataFrame so plot_bbp_profile still works
Y_bin = Y.copy()
Y_bin[:] = transformer.fit_transform(Y)
Y_bin[:10]
plot_bbp_profile(Y_bin)
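The fitted discretizer stores the learned quantile edges per column in its bin_edges_ attribute, which is handy for mapping the ordinal codes back to bbp ranges:
# quantile edges learned for the first output column
transformer.bin_edges_[0]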