MagmaSharp – .NET High Level API for MAGMA


Introduction

A few weeks ago, while doing research, I needed a fast program for Singular Value Decomposition (SVD). I have an SVD implementation in my open source project Daany, which uses the SVD implementation of Accord.NET, a great machine learning framework. The decomposition works fine and smoothly for small matrices with a few hundred rows/columns, but for matrices with more than 500 rows and columns it is pretty slow. So I was forced to think about using a different library in order to speed up the SVD calculation. I could have used one of the Python libraries, e.g. TensorFlow, PyTorch or SciPy, or similar libraries for R and C++. I have used such libraries and I know how fast they are. But I still wanted approximately the same speed on .NET as well.

Then I decided to look at how I could use one of the available C++ based libraries. Had I switched to a C++ based project, I would not have been able to use the .NET framework, where the other parts of my research are implemented. So the only solution was to implement a wrapper around a C++ library and use P/Invoke to expose the required methods to C# code.
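
For illustration, a P/Invoke declaration looks roughly like the following minimal sketch; the native library name and the exported function are assumptions used only to show the mechanism, not the actual MagmaSharp binding.

using System.Runtime.InteropServices;

public static class NativeLinAlg
{
    // Import a routine exported from a native C++ wrapper library.
    // "MagmaBinding" and "svd_double" are hypothetical names for illustration.
    [DllImport("MagmaBinding", CallingConvention = CallingConvention.Cdecl)]
    public static extern int svd_double(int rows, int cols,
        double[] A, double[] s, double[] U, double[] VT);
}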

The first idea was to use the LAPACK/BLAS numerical libraries to calculate not only the SVD but a whole set of linear algebra routines. The LAPACK/BLAS libraries have a long history going back to the 1970s. They have proved to be very fast and reliable. However, they do not support the GPU.

Then I came across MAGMA, which is essentially LAPACK for the GPU. MAGMA is a very complex and fast library, but it requires CUDA. If the machine has no CUDA device, the library cannot be used.

Then I decided to take a hybrid approach: use MAGMA whenever the machine has CUDA, and otherwise use LAPACK as the computation engine. This approach is the most complex one and required advanced skills in C++ and C#. After more than a month of implementation, MagmaSharp was published as an open source project on GitHub, with the first public release, MagmaSharp 0.02.01, on Nuget.org.
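
Conceptually, the hybrid dispatch can be sketched as follows; all type and member names here are illustrative assumptions, not the actual MagmaSharp internals.

// Illustrative sketch: one public entry point, two interchangeable backends.
public interface ISvdBackend
{
    // computes the singular values of A
    float[] Gesvd(float[,] A);
}

public static class LinAlgDispatcher
{
    // Prefer the MAGMA/CUDA backend when a CUDA device is present,
    // otherwise fall back to the LAPACK backend on the CPU.
    public static float[] Svd(float[,] A, ISvdBackend gpu, ISvdBackend cpu, bool hasCuda)
    {
        return hasCuda ? gpu.Gesvd(A) : cpu.Gesvd(A);
    }
}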

MagmaSharp v0.02.01

The first release of MagmaSharp supports the following MAGMA driver routines for a general rectangular matrix:

  1. gesv – linear system solver, $AX = B$, where A is a general non-symmetric matrix,
  2. gels – least squares solver, $AX = B$, where A is rectangular,
  3. geev – eigenvalue solver for a non-symmetric matrix, $AX = X\Lambda$,
  4. gesvd – singular value decomposition (SVD), $A = U\Sigma V^T$.

The library supports float and double value types.

Software requirements

The project is built on .NET Core 3.1 and .NET Standard 2.1. It is built and tested on Windows 10 1909 only.

Software (Native Libraries) requirements

In order to compile, build and use the library, the following native libraries need to be installed.

However, if you install MagmaSharp as a NuGet package, both libraries are included, so you don’t have to install them yourself.

How to use MagmaSharp

MagmaSharp is packaged as a NuGet package and can be added to your .NET project like an ordinary .NET component. You don’t have to worry about native libraries and dependencies; everything is included in the package. The package can be installed from this link, or just search for MagmaSharp.
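
For orientation only, a call might look roughly like the sketch below; the class and method names are assumptions for illustration, so consult the GitHub page for the actual API.

using MagmaSharp; // assumed namespace

// Solve the linear system AX = B with the gesv driver routine
// (hypothetical wrapper signature; runs on CUDA when available, LAPACK otherwise).
float[,] A = { { 4f, 1f }, { 1f, 3f } };
float[,] B = { { 1f }, { 2f } };
float[,] X = LinAlg.Solve(A, B);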

How to Build MagmaSharp from the source

  1. Download the MagmaSharp source code from the GitHub page.

  2. Reference the MAGMA static library and put it into the folder MagmaLib. The MAGMA static library can be downloaded and built from the official site.

  3. Open ‘MagmaSharp.sln’ with Visual Studio 2019.

  4. Make sure the building architecture is x64.

  5. Restore Nuget packages.

  6. Build and run the Solution.

How to start with MagmaSharp

The best way to start with MagmaSharp is to take a look at the MagmaSharp.XUnit project; it contains a small example of how to use each of the implemented methods, with or without a CUDA device.


Building Predictive Maintenance Model Using ML.NET


Summary

This C# notebook is a continuation of the previous blog post, Predictive Maintenance on .NET Platform.

The notebook is implemented entirely on the .NET platform using the C# Jupyter Notebook and Daany – a C# data analytics library. There are small differences between this notebook and the notebooks at the official Azure gallery portal, but in most cases the code follows the steps defined there.

The notebook shows how to use .NET Jupyter Notebook with Daany.DataFrame and ML.NET in order to prepare the data and build the Predictive Maintenance Model on .NET platform.

Description

In the previous post, we analyzed five data sets containing telemetry, errors, maintenance records, machine metadata and failures for 100 machines. The data were transformed and analyzed in order to create the final data set for building a machine learning model for predictive maintenance.

Once we created all the features from the data sets, as a final step we created the label column so that it describes whether a certain machine will fail in the next 24 hours due to the failure of component1, component2, component3 or component4, or whether it will continue to work. In this part, we are going to perform the machine learning task and train a model for predicting whether a certain machine will fail in the next 24 hours, or will keep functioning normally in that time period.

The model we are going to build is a multi-class classification model, since it has 5 values to predict:

  • component1,
  • component2,
  • component3,
  • component4 or
  • none – means it will continue to work.

ML.NET framework as library for training

In order to train the model, we are going to use ML.NET – Microsoft’s open source framework for machine learning on the .NET platform. First we need some preparation code:

  • the required NuGet packages,
  • a set of using statements, and code for formatting the output.

At the beginning of this notebook, we installed several NuGet packages in order to complete it. The following code shows the using statements and the method for formatting data from the DataFrame.

//using Microsoft.ML.Data;
using XPlot.Plotly;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

//
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms;
using Microsoft.ML.Trainers.LightGbm;
//
using Daany;
using Daany.Ext;
//DataFrame formatter
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i]){
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r)))); 
    writer.Write(t);
}, "text/html");

Once we have installed the NuGet packages and defined the using statements, we are going to define the classes needed to create the ML.NET pipeline.

The class PrMaintenanceClass contains the features (properties) we built in the previous post; we need them to define the features in the ML.NET pipeline. The second class, PrMaintenancePrediction, is used for prediction and model evaluation.

class PrMaintenancePrediction
{
    [ColumnName("PredictedLabel")]
    public string failure { get; set; }
}
class PrMaintenanceClass
{
    public DateTime datetime { get; set; }
    public int machineID { get; set; }
    public float voltmean_3hrs { get; set; }
    public float rotatemean_3hrs { get; set; }
    public float pressuremean_3hrs { get; set; }
    public float vibrationmean_3hrs { get; set; }
    public float voltstd_3hrs { get; set; }
    public float rotatestd_3hrs { get; set; }
    public float pressurestd_3hrs { get; set; }
    public float vibrationstd_3hrs { get; set; }
    public float voltmean_24hrs { get; set; }
    public float rotatemean_24hrs { get; set; }
    public float pressuremean_24hrs { get; set; }
    public float vibrationmean_24hrs { get; set; }
    public float voltstd_24hrs { get; set; }
    public float rotatestd_24hrs { get; set; }
    public float pressurestd_24hrs { get; set; }
    public float vibrationstd_24hrs { get; set; }
    public float error1count { get; set; }
    public float error2count { get; set; }
    public float error3count { get; set; }
    public float error4count { get; set; }
    public float error5count { get; set; }
    public float sincelastcomp1 { get; set; }
    public float sincelastcomp2 { get; set; }
    public float sincelastcomp3 { get; set; }
    public float sincelastcomp4 { get; set; }
    public string model { get; set; }
    public float age { get; set; }
    public string failure { get; set; }
}

Now that we have defined the class types, we are going to implement the pipeline for this ML model. First, we create an MLContext with a constant seed, so that the model can be reproduced by any user running this notebook. Then we load the data and split it into a training and a testing part.

MLContext mlContext= new MLContext(seed:88888);
var strPath="data/final_dataFrame.csv";
var mlDF= DataFrame.FromCsv(strPath);
//
//split data frame on training and testing part
//split at 2015-08-01 00:00:00, to train on the first 8 months and test on last 4 months
var trainDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.LessOrEqual);
var testDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.Greather);

The summary of the training set is shown in the following tables:

Similarly, the testing set has the following summary:

Once we have the data in application memory, we can prepare the ML.NET pipeline. The pipeline starts with the transformation of the Daany.DataFrame into an IDataView collection. For this task, the LoadFromEnumerable method is used.

//Load daany:DataFrame into ML.NET pipeline
public static IDataView loadFromDataFrame(MLContext mlContext,Daany.DataFrame df)
{
    IDataView dataView = mlContext.Data.LoadFromEnumerable(df.GetEnumerator(oRow =>
    {
        //convert row object array into PrManitenance row
        var ooRow = oRow;
        var prRow = new PrMaintenanceClass();
        prRow.datetime = (DateTime)ooRow["datetime"];
        prRow.machineID = (int)ooRow["machineID"];
        prRow.voltmean_3hrs = Convert.ToSingle(ooRow["voltmean_3hrs"]);
        prRow.rotatemean_3hrs = Convert.ToSingle(ooRow["rotatemean_3hrs"]);
        prRow.pressuremean_3hrs = Convert.ToSingle(ooRow["pressuremean_3hrs"]);
        prRow.vibrationmean_3hrs = Convert.ToSingle(ooRow["vibrationmean_3hrs"]);
        prRow.voltstd_3hrs = Convert.ToSingle(ooRow["voltsd_3hrs"]);
        prRow.rotatestd_3hrs = Convert.ToSingle(ooRow["rotatesd_3hrs"]);
        prRow.pressurestd_3hrs = Convert.ToSingle(ooRow["pressuresd_3hrs"]);
        prRow.vibrationstd_3hrs = Convert.ToSingle(ooRow["vibrationsd_3hrs"]);
        prRow.voltmean_24hrs = Convert.ToSingle(ooRow["voltmean_24hrs"]);
        prRow.rotatemean_24hrs = Convert.ToSingle(ooRow["rotatemean_24hrs"]);
        prRow.pressuremean_24hrs = Convert.ToSingle(ooRow["pressuremean_24hrs"]);
        prRow.vibrationmean_24hrs = Convert.ToSingle(ooRow["vibrationmean_24hrs"]);
        prRow.voltstd_24hrs = Convert.ToSingle(ooRow["voltsd_24hrs"]);
        prRow.rotatestd_24hrs = Convert.ToSingle(ooRow["rotatesd_24hrs"]);
        prRow.pressurestd_24hrs = Convert.ToSingle(ooRow["pressuresd_24hrs"]);
        prRow.vibrationstd_24hrs = Convert.ToSingle(ooRow["vibrationsd_24hrs"]);
        prRow.error1count = Convert.ToSingle(ooRow["error1count"]);
        prRow.error2count = Convert.ToSingle(ooRow["error2count"]);
        prRow.error3count = Convert.ToSingle(ooRow["error3count"]);
        prRow.error4count = Convert.ToSingle(ooRow["error4count"]);
        prRow.error5count = Convert.ToSingle(ooRow["error5count"]);
        prRow.sincelastcomp1 = Convert.ToSingle(ooRow["sincelastcomp1"]);
        prRow.sincelastcomp2 = Convert.ToSingle(ooRow["sincelastcomp2"]);
        prRow.sincelastcomp3 = Convert.ToSingle(ooRow["sincelastcomp3"]);
        prRow.sincelastcomp4 = Convert.ToSingle(ooRow["sincelastcomp4"]);
        prRow.model = (string)ooRow["model"];
        prRow.age = Convert.ToSingle(ooRow["age"]);
        prRow.failure = (string)ooRow["failure"];
        //
        return prRow;
    }));
            
    return dataView;
}

Load the data sets into the app memory:

//Split dataset in two parts: TrainingDataset  and TestDataset          
var trainData = loadFromDataFrame(mlContext, trainDF);
var testData = loadFromDataFrame(mlContext, testDF);

Prior to training we need to process the data, so that all non-numerical columns are encoded into numerical ones. We also need to define which columns will be part of the Features and which one will be the Label. For this purpose we define the PrepareData method.

public static IEstimator<ITransformer> PrepareData(MLContext mlContext)
{
    //one hot encoding category column
    IEstimator<ITransformer> dataPipeline =

    mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(PrMaintenanceClass.failure))
    //encode model column
    .Append(mlContext.Transforms.Categorical.OneHotEncoding("model",outputKind: OneHotEncodingEstimator.OutputKind.Indicator))

    //define features column
    .Append(mlContext.Transforms.Concatenate("Features",
    // 
    nameof(PrMaintenanceClass.voltmean_3hrs), nameof(PrMaintenanceClass.rotatemean_3hrs),
    nameof(PrMaintenanceClass.pressuremean_3hrs),nameof(PrMaintenanceClass.vibrationmean_3hrs),
    nameof(PrMaintenanceClass.voltstd_3hrs), nameof(PrMaintenanceClass.rotatestd_3hrs), 
    nameof(PrMaintenanceClass.pressurestd_3hrs), nameof(PrMaintenanceClass.vibrationstd_3hrs), 
    nameof(PrMaintenanceClass.voltmean_24hrs),nameof(PrMaintenanceClass.rotatemean_24hrs),
    nameof(PrMaintenanceClass.pressuremean_24hrs),nameof(PrMaintenanceClass.vibrationmean_24hrs), 
    nameof(PrMaintenanceClass.voltstd_24hrs),nameof(PrMaintenanceClass.rotatestd_24hrs),
    nameof(PrMaintenanceClass.pressurestd_24hrs),nameof(PrMaintenanceClass.vibrationstd_24hrs), 
    nameof(PrMaintenanceClass.error1count), nameof(PrMaintenanceClass.error2count),
    nameof(PrMaintenanceClass.error3count), nameof(PrMaintenanceClass.error4count), 
    nameof(PrMaintenanceClass.error5count), nameof(PrMaintenanceClass.sincelastcomp1),
    nameof(PrMaintenanceClass.sincelastcomp2),nameof(PrMaintenanceClass.sincelastcomp3),
    nameof(PrMaintenanceClass.sincelastcomp4),nameof(PrMaintenanceClass.model), nameof(PrMaintenanceClass.age) ));

    return dataPipeline;
}

As can be seen, the method converts the label column failure, a simple textual column, into a categorical column whose numerical representations of the categories are called Keys.

Now that we have finished with the data transformation, we define the Train method, which sets up the ML algorithm with its hyper-parameters and runs the training process. Once called, the method returns the trained model.

//train method
public static TransformerChain<ITransformer> Train(MLContext mlContext, IDataView preparedData)
{
    var transformationPipeline=PrepareData(mlContext);
    //settings hyper parameters
    var options = new LightGbmMulticlassTrainer.Options();
    options.FeatureColumnName = "Features";
    options.LearningRate = 0.005;
    options.NumberOfLeaves = 50;
    options.NumberOfIterations = 2000;
    options.UnbalancedSets = true;
    //
    var boost = new DartBooster.Options();
    boost.XgboostDartMode = true;
    boost.MaximumTreeDepth = 25;
    options.Booster = boost;
    
    // Define LightGbm algorithm estimator
    IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm(options);

    //train the ML model
    TransformerChain<ITransformer> model = transformationPipeline.Append(lightGbm).Fit(preparedData);

    //return trained model for evaluation
    return model;
}

Training process and model evaluation

Since we have all required methods, the main program structure looks like:

//prepare data transformation pipeline
var dataPipeline = PrepareData(mlContext);

//print prepared data
var pp = dataPipeline.Fit(trainData);
var transformedData = pp.Transform(trainData);

//train the model
var model = Train(mlContext, trainData);

Once the Train method returns the model, the evaluation phase starts. In order to evaluate the model, we perform a full evaluation on both the training and the testing data.

Model Evaluation with train data set

The evaluation of the model will be performed for training and testing data sets:

//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);

ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

The model evaluation output:

************************************************************
*    Metrics for TRAIN DataSet multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9603, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.999, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0015, the closer to 0, the better
    LogLoss for class 1 = 0, the closer to 0, the better
    LogLoss for class 2 = 0.088, the closer to 0, the better
    LogLoss for class 3 = 0.0606, the closer to 0, the better
************************************************************
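
Here AccuracyMicro is the overall fraction of correctly classified rows, while AccuracyMacro is the unweighted average of the per-class accuracies, so every component class counts equally regardless of how many rows it has. With $TP_c$ the correctly classified rows of class $c$, $N$ the total number of rows, and $N_c$ the number of rows of class $c$:

$$ \text{AccuracyMicro} = \frac{1}{N}\sum_{c=1}^{C} TP_c, \qquad \text{AccuracyMacro} = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{N_c} $$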
 
Train DataSet Confusion Matrix 
###############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 165 371 |     0 |     0 |     0 |     0 | 1.0000
    comp4 ||     0 |   772 |    16 |    25 |    11 | 0.9369
    comp1 ||     0 |     8 |   884 |    26 |     4 | 0.9588
    comp2 ||     0 |    31 |    22 | 1 097 |     8 | 0.9473
    comp3 ||     0 |    13 |     4 |     8 |   576 | 0.9584
          ||========================================
Precision ||1.0000 |0.9369 |0.9546 |0.9490 |0.9616 |
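
The Recall column and the Precision row in the table above are the standard per-class metrics; for a class $c$ with true positives $TP_c$, false positives $FP_c$ and false negatives $FN_c$:

$$ \text{Precision}_c = \frac{TP_c}{TP_c + FP_c}, \qquad \text{Recall}_c = \frac{TP_c}{TP_c + FN_c} $$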

As can be seen, the model predicts the values correctly in most cases on the training data set. Now let’s see how the model predicts data which have not been part of the training process.

Model evaluation with test data set

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("Test Dataset", metricsTest);

ConsoleHelper.ConsoleWriteHeader("Test DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);

************************************************************
*    Metrics for Test Dataset multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9505, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.9986, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0033, the closer to 0, the better
    LogLoss for class 1 = 0.0012, the closer to 0, the better
    LogLoss for class 2 = 0.1075, the closer to 0, the better
    LogLoss for class 3 = 0.1886, the closer to 0, the better
************************************************************
 
Test DataSet Confusion Matrix 
##############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 120 313 |     6 |    15 |     0 |     0 | 0.9998
    comp4 ||     1 |   552 |    10 |    17 |     4 | 0.9452
    comp1 ||     2 |    14 |   464 |    24 |    24 | 0.8788
    comp2 ||     0 |    39 |     0 |   835 |    16 | 0.9382
    comp3 ||     0 |     4 |     0 |     0 |   412 | 0.9904
          ||========================================
Precision ||1.0000 |0.8976 |0.9489 |0.9532 |0.9035 |

We can see that the model has 99% overall (micro) accuracy and 95% average per-class (macro) accuracy. The complete notebook for this blog post can be found here.

Your first data analysis with .NET Jupyter Notebook and Daany.DataFrame


Note: The .NET Jupyter notebook for this blog post can be found here.

The Structure of Daany.DataFrame

The main part of the Daany project is Daany.DataFrame – a C# implementation of a data frame. A data frame is a software component used for handling tabular data, especially for data preparation, feature engineering, and analysis during the development of machine learning models. The design of Daany.DataFrame is based on simplicity and the .NET coding standard. It represents tabular data consisting of columns and rows. Each column has a name and type, and each row has its index and label.

Usually, rows are referred to as axis zero, while columns are axis one.

The following image shows a data frame structure:

[Image: data frame structure]

The basic components of the data frame are:

  • header – the list of column names,
  • index – the list of objects identifying each row,
  • data – the list of values in the data frame,
  • missing values – cells with no value in the data frame.

The image above shows the data frame components visually, and how they are positioned in the data frame.

Create Data Frame from a text based file

The data we use are stored in files, and they must be loaded into application memory in order to be analyzed and transformed. Loading data from files with Daany.DataFrame is as easy as calling one method.

By using the static method DataFrame.FromCsv, a user can create a data frame object from a CSV file. Conversely, a data frame can be persisted to disk by calling the static method DataFrame.ToCsv.

The following code uses the static methods ToCsv and FromCsv to persist and then load a data frame:

string filename = "df_file.txt";
//define a dictionary of data
var dict = new Dictionary<string, List<object>>
{
    { "ID",new List<object>() { 1,2,3} },
    { "City",new List<object>() { "Sarajevo", "Seattle", "Berlin" } },
    { "Zip Code",new List<object>() { 71000,98101,10115 } },
    { "State",new List<object>() {"BiH","USA","GER" } },
    { "IsHome",new List<object>() { true, false, false} },
    { "Values",new List<object>() { 3.14, 3.21, 4.55 } },
    { "Date",new List<object>() { DateTime.Now.AddDays(-20) , DateTime.Now.AddDays(-10) , DateTime.Now.AddDays(-5) } },

};

//create data frame with 3 rows and 7 columns
var df = new DataFrame(dict);

//first save the data frame to disk
DataFrame.ToCsv(filename, df);

//load the data frame back from the file
var dfFromFile = DataFrame.FromCsv(filename, sep:',');

//show dataframe
dfFromFile

First, we created a data frame from the dictionary collection. Then we saved the data frame to a file and, after it was successfully saved, loaded the same data frame back from the CSV file. The last line displays the loaded data frame in the output cell:

[Image: the loaded data frame]

In case performance is important, you should pass the column types to the FromCsv method, which can reduce loading time by up to 50%. For example, the following code loads the data from the file by passing predefined column types:

//define the column types
var colTypes1 = new ColType[] { ColType.I32, ColType.IN, ColType.I32, ColType.STR, ColType.I2, ColType.F32, ColType.DT };
//create data frame with 3 rows and 7 columns
var dfFromFile = DataFrame.FromCsv(filename, sep: ',', colTypes: colTypes1);

And we get the same result:

[Image: the loaded data frame]

Loading Real Data from the Web

Data can be loaded directly from web storage by using the static FromWeb method. The Concrete Slump Test data set includes 103 data points, with 7 input variables and 3 output variables: Cement, Slag, Fly ash, Water, SP, Coarse Aggr., Fine Aggr., SLUMP (cm), FLOW (cm), Strength (Mpa). The following code loads the Concrete Slump Test data set into a Daany DataFrame:

//define web url where the data is stored
var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/concrete/slump/slump_test.data";
//
var df = DataFrame.FromWeb(url);
df.Head(5)

[Image: the first 5 rows of the data set]

Once we have the data in application memory, we can perform some statistical calculations. First, let’s see the structure of the data by calling the Describe method:

df.Describe(false)

[Image: output of the Describe method]

Now we see that we have a data frame with 103 rows and that all columns are of numerical type. The frequencies indicate that the values are mostly not repeated. From the maximum and minimum values, we can see the data have no outliers, since the distributions of the values tend to be normal.

Data Visualization

Let’s perform some visualization to see what the data look like. First, let’s see the Slump distribution with respect to SP and Fly ash:

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SP"],
        y = df["Fly ash"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = df["SLUMP(cm)"].Select(x=>x),
            colorscale = "Jet"
        }
    }
);

var layout = new Layout.Layout(){title="Slump vs. Cement and Slag"};
chart.WithLayout(layout);
chart.WithXTitle("Cement");
chart.WithYTitle("Slag");

display(chart);

[Image: scatter plot of SP vs. Fly ash, colored by Slump]

From the chart above, we cannot see any relation between those two columns. Let’s see the chart of Slump against Flow:

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SLUMP(cm)"],
        y = df["FLOW(cm)"],
        mode = "markers",
    }
);

var layout = new Layout.Layout(){title="Slump vs. Flow"};
chart.WithLayout(layout);
chart.WithLegend(true);
chart.WithXTitle("Slump");
chart.WithYTitle("Flow");

display(chart);

[Image: scatter plot of Slump vs. Flow]

We can see some relation in the chart, and the relation is positive: as Slump grows, the Flow value grows as well. If we want to measure the relation between these two columns, we can do that with the following code:

var x1= df["SLUMP(cm)"].Select(x=>Convert.ToDouble(x)).ToArray();
var x2= df["FLOW(cm)"].Select(x=>Convert.ToDouble(x)).ToArray();

//The Pearson coefficient is calculated by
var r=x1.R(x2);
r

The correlation is 0.90 which indicates a strong relationship between those two columns.
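
Assuming the R extension method computes the standard Pearson coefficient (as the comment in the code above suggests), the value is defined as:

$$ r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\,\sqrt{\sum_{i}(y_i - \bar{y})^2}} $$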

The complete .NET Jupyter Notebook for this blog post can be found here.

C# Jupyter Notebook Part 2/n


What is .NET Jupyter Notebook

In this blog post, we are going to explore the main features of the new C# Jupyter Notebook. For those who have used notebooks in other programming languages like Python or R, this will be an easy task. First of all, the notebook concept provides a quick, simple and straightforward way to present a mix of text and $\LaTeX$, source code, and its output. This means you have a full-featured platform to write a paper or blog post, presentation slides, lecture notes, and other educational materials.

The notebook consists of cells, where a user can write code or Markdown text. Once the cell content is complete, the cell can be run with Ctrl+Enter or by pressing the Run button on the notebook toolbar. The image below shows the notebook toolbar with the Run button. The popup combo box shows the type of cell the user can define: for text, Markdown should be selected; for source code, the cell type should be Code.

[Image: notebook toolbar with the Run button]

To start writing code in the C# notebook, the first thing we should do is install NuGet packages, or add assembly references, and define using statements. The following code installs several NuGet packages and declares several using statements. But before writing code, we should add a new cell by pressing the + toolbar button.

The first few NuGet packages are ML.NET packages. Then we install the XPlot package for data visualization in the .NET notebook, and finally a set of Daany packages for data analytics: Daany.DataFrame for data exploration and analysis, and Daany.DataFrame.Ext, a set of extensions for data manipulation used with ML.NET.

//ML.NET Packages
#r "nuget:Microsoft.ML.LightGBM"
#r "nuget:Microsoft.ML"
#r "nuget:Microsoft.ML.DataView"

//Install XPlot package
#r "nuget:XPlot.Plotly"

//Install Daany.DataFrame 
#r "nuget:Daany.DataFrame"
#r "nuget:Daany.DataFrame.Ext"
using System;
using System.Linq;

//Daany data frame
using Daany;
using Daany.Ext;

//Plotting functionalities
using XPlot.Plotly;

//ML.NET using
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers.LightGbm;

The output for the above code:

[Image: NuGet package installation output]

Once the NuGet packages are installed successfully and the using statements are declared, we can start with data exploration.

We can also define classes and methods globally. The following code implements a formatter method for displaying a Daany.DataFrame in the output cell.

// Temporal DataFrame formatter for this early preview
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r))));
    
    writer.Write(t);
}, "text/html");

For this demo we will use the famous Iris data set. We will download the file from the internet, load it by using Daany.DataFrame, and display the first few rows. In order to do that, we run the following code:

var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data";
var cols = new string[] {"sepal_length","sepal_width", "petal_length", "petal_width", "flower_type"};
var df = DataFrame.FromWeb(url, sep:',',names:cols);
df.Head(5)

The output looks like this:

[Image: the first 5 rows of the Iris data set]

As can be seen, the last line of the previous code has no semicolon, which means its value is displayed in the output cell. Let’s move on and implement two new columns: the sepal and petal areas of the flower. The expressions we are going to use are:

$$ PetalArea = petal\_width \cdot petal\_length; \qquad SepalArea = sepal\_width \cdot sepal\_length $$

As can be seen, the $\LaTeX$ is fully supported in the notebook.

The above formulas are implemented in the following code:

//calculate two new columns into dataset
df.AddCalculatedColumn("SepalArea", (r, i) => Convert.ToSingle(r["sepal_width"]) * Convert.ToSingle(r["sepal_length"]));
df.AddCalculatedColumn("PetalArea", (r, i) => Convert.ToSingle(r["petal_width"]) * Convert.ToSingle(r["petal_length"]));
df.Head(5)

[Image: data frame with the two new area columns]

The data frame now has two new columns, indicating the sepal and petal areas of the flower. In order to see the basic statistical parameters for each of the columns, we call the Describe method.

//see descriptive stats of the final ds
df.Describe(false)

[Image: descriptive statistics of the data set]

From the table above, we can see the flower column has only 3 distinct values. The most frequent value has a frequency of 50, which indicates a balanced dataset.

Data visualization

The most powerful feature of the notebook is data visualization. In this section, we are going to plot some interesting charts.

Before looking at how the sepal and petal areas are spread in the 2D plane, let’s check the distribution of the flower types:

//plot the distribution of the flower types
//XPlot Histogram reference: http://tpetricek.github.io/XPlot/reference/xplot-plotly-graph-histogram.html

var flowerHistogram = Chart.Plot(new Graph.Histogram(){x = df["flower_type"], autobinx = false, nbinsx = 20});
var layout = new Layout.Layout(){title="Distribution of iris flower"};
flowerHistogram.WithLayout(layout);
display(flowerHistogram);

[Image: histogram of the iris flower types]

The chart is also an indication of a balanced dataset.

Now let’s plot the areas depending on the flower type:

// Plot Sepal vs. Petal area with flower type

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SepalArea"],
        y = df["PetalArea"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = df["flower_type"].Select(x=>x.ToString()=="Iris-virginica"?1:(x.ToString()=="Iris-versicolor"?2:3)),
            colorscale = "Jet"
        }
    }
);

var layout = new Layout.Layout(){title="Plot Sepal vs. Petal Area & color scale on flower type"};
chart.WithLayout(layout);
chart.WithLegend(true);
chart.WithLabels(new string[3]{"Iris-virginica","Iris-versicolor", "Iris-setosa"});
chart.WithXTitle("Sepal Area");
chart.WithYTitle("Petal Area");
chart.Width = 800;
chart.Height = 400;

display(chart);

[Image: Sepal vs. Petal area, colored by flower type]

As can be seen from the chart above, the flower types are separated almost linearly, since we used the petal and sepal areas instead of widths and lengths. With this transformation, we can get a 100% accurate ML model.

Machine Learning

Once we have finished with data transformation and visualization, we can define the final data frame for the machine learning part. To that end, we are going to select only two feature columns and one label column, which will be the flower type.

//create new data-frame by selecting only three columns
var derivedDF = df["SepalArea","PetalArea","flower_type"];
derivedDF.Head(5)

[Image: the final data frame]

Since we are going to use ML.NET, we need to declare an Iris class in order to load the data into ML.NET.

//Define an Iris class for machine learning.
class Iris
{
    public float PetalArea { get; set; }
    public float SepalArea { get; set; }
    public string Species { get; set; }
}
//Create ML Context
MLContext mlContext = new MLContext(seed:2019);

Then load the data from Daany data frame into ML.NET:

//Load Data Frame into Ml.NET data pipeline
IDataView dataView = mlContext.Data.LoadFromEnumerable<Iris>(derivedDF.GetEnumerator<Iris>((oRow) =>
{
    //convert row object array into Iris row

    var prRow = new Iris();
    prRow.SepalArea = Convert.ToSingle(oRow["SepalArea"]);
    prRow.PetalArea = Convert.ToSingle(oRow["PetalArea"]);
    prRow.Species = Convert.ToString(oRow["flower_type"]);
    //
    return prRow;
}));

Once we have data, we can split it into train and test sets:

//Split dataset in two parts: TrainingDataset (80%) and TestDataset (20%)
var trainTestData = mlContext.Data.TrainTestSplit(dataView, testFraction: 0.2);
var trainData = trainTestData.TrainSet;
var testData = trainTestData.TestSet;

The next step in preparing the data for training is to define the pipeline for data transformation and feature engineering:

//one-hot encode the output category column by defining KeyValues for each category
IEstimator<ITransformer> dataPipeline =
mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(Iris.Species))

//define features columns
.Append(mlContext.Transforms.Concatenate("Features",nameof(Iris.SepalArea), nameof(Iris.PetalArea)));

Once we complete the preparation part, we can perform the training. The training starts by calling Fit on the pipeline:

%%time
 // Define LightGbm algorithm estimator
IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm();
//train the ML model
TransformerChain<ITransformer> model = dataPipeline.Append(lightGbm).Fit(trainData);

Once the training completes, we have a trained model which can be evaluated. In order to print the evaluation results with formatting, we use the Daany DataFrame extensions, which implement formatted printing of the results.


//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);
ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN Iris DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

[Image: evaluation metrics and confusion matrix for the training set]

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("TEST Iris Dataset", metricsTest);
ConsoleHelper.ConsoleWriteHeader("Test Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);

[Image: evaluation metrics and confusion matrix for the test set]

As can be seen, we have a 100% accurate model for Iris flower recognition. Now, let’s add a new column called PredictedLabel to the data frame, so that the model predictions sit next to the actual values.
In order to do that, we evaluate the model on the training and the test data sets. Once we have predictions for both sets, we join them and add them as a separate column in the Daany data frame. The following code does exactly what we described.

var flowerLabels = DataFrameExt.GetLabels(predictions.Schema).ToList();
var p1 = predictions.GetColumn<uint>("PredictedLabel").Select(x=>(int)x).ToList();
var p2 = testPrediction.GetColumn<uint>("PredictedLabel").Select(x => (int)x).ToList();
//join train and test
p1.AddRange(p2);
var p = p1.Select(x => (object)flowerLabels[x-1]).ToList();
//add new column into df
var dic = new Dictionary<string, List<object>> { { "PredictedLabel", p } };
var dff = derivedDF.AddColumns(dic);
dff.Head()

[Image: the first rows of the data frame with the PredictedLabel column]

The output above shows the first few rows of the data frame. To see the last few rows, we call the Tail method.

dff.Tail()

[Image: the last rows of the data frame with the PredictedLabel column]

In this blog post, we saw how we can be more productive when using the .NET Jupyter Notebook for machine learning, data exploration and transformation, using ML.NET and Daany – the DAtaANalYtics library. The complete source code for this notebook can be found in the GitHub repo: https://github.com/bhrnjica/notebooks/blob/master/net_jupyter_notebook_part2.ipynb