MagmaSharp – .NET High Level API for MAGMA


Introduction

A few weeks ago, I was doing research and needed a fast program for Singular Value Decomposition. I have an SVD implementation in my open source project Daany, which uses the SVD implementation from Accord.NET, a great machine learning framework. The decomposition works fine and smoothly for small matrices with a few hundred rows/columns, but for matrices with more than 500 rows and columns it is pretty slow. So I was forced to think about using a different library in order to speed up the SVD calculation. I could use one of the Python libraries, e.g. TensorFlow, PyTorch or SciPy, or similar libraries from R and C++. I have used such libraries and I know how fast they are. But I still wanted approximately the same speed on .NET as well.

Then I decided to look at how I could use some of the available C++ based libraries. Once I switched to a C++ based project, I would not be able to use the .NET framework where other parts of my research are implemented. So the only solution was to implement a wrapper around a C++ library and use P/Invoke in order to expose the required methods in C# code.
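In its simplest form, the P/Invoke pattern looks like this (the native library name and export below are hypothetical, just to show the mechanics; they are not MagmaSharp's actual entry points):

using System.Runtime.InteropServices;

internal static class NativeMethods
{
    //hypothetical native export from a C++ wrapper library
    [DllImport("magma_wrapper", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int mw_dgesv(int n, int nrhs, double[] a, double[] b);
}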

The first idea was to use the LAPACK/BLAS numerical libraries to calculate not only the SVD but a whole set of linear algebra routines. LAPACK/BLAS have a long history going back to the 1970s. They have proved to be very fast and reliable. However, they do not support the GPU.

Then I came across MAGMA, which is essentially LAPACK for the GPU. MAGMA is a very complex and fast library, but it requires CUDA. If the machine has no CUDA, the library cannot be used.

So I decided to take a hybrid approach: use MAGMA whenever the machine has CUDA, and otherwise fall back to LAPACK as the computation engine. This approach is the most complex and requires advanced skills in C++ and C#. After more than a month of implementation, MagmaSharp was published as a GitHub open source project, with the first public release, MagmaSharp 0.02.01, on Nuget.org.
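Stripped to its bones, the idea looks something like the sketch below. This is a simplified illustration of the dispatch pattern, not MagmaSharp's actual internals; the device probe and the native calls are placeholders:

using System;

//minimal sketch of the hybrid engine dispatch (illustrative only)
public static class HybridEngine
{
    //in the real library this flag is set by probing the machine for a CUDA device
    public static bool HasCuda { get; set; } = false;

    //solve AX = B, routing to the GPU (MAGMA) or the CPU (LAPACK) engine
    public static double[] Gesv(double[] a, double[] b, int n, int nrhs)
    {
        return HasCuda ? GesvMagma(a, b, n, nrhs) : GesvLapack(a, b, n, nrhs);
    }

    //placeholders standing in for the native P/Invoke calls
    static double[] GesvMagma(double[] a, double[] b, int n, int nrhs)
        => throw new NotImplementedException("P/Invoke into the MAGMA wrapper");
    static double[] GesvLapack(double[] a, double[] b, int n, int nrhs)
        => throw new NotImplementedException("P/Invoke into the LAPACK wrapper");
}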

MagmaSharp v0.02.01

The first release of MagmaSharp supports the following MAGMA driver routines for a general rectangular matrix:

  1. gesv – solves the linear system AX = B, where A is a general non-symmetric matrix,
  2. gels – least squares solve of AX = B, where A is rectangular,
  3. geev – eigenvalue solver for a non-symmetric matrix, AX = X \lambda,
  4. gesvd – singular value decomposition (SVD), A = U \Sigma V^T.

The library supports float and double value types.
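To give a feeling for the API, a call to the SVD routine might look roughly like the following sketch. The method and parameter names here are my assumptions for illustration; the MagmaSharp.XUnit project mentioned below contains the actual signatures.

//illustrative sketch only – consult the MagmaSharp.XUnit tests for the real API
float[,] A = new float[,] { { 1f, 2f }, { 3f, 4f }, { 5f, 6f } };

//hypothetical call shape: factor A into U, s (singular values) and vT
//var (s, U, vT) = MagmaSharp.LinAlg.Svd(A, computeVectors: true);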

Software requirements

The project is built on .NET Core 3.1 and .NET Standard 2.1. It has been built and tested on Windows 10 1909 only.

Software (Native Libraries) requirements

In order to compile, build and use the library, the following native libraries need to be installed.

However, if you install MagmaSharp as a NuGet package, both libraries are included, so you don't have to install them separately.

How to use MagmaSharp

MagmaSharp is packaged as a NuGet package and can be added to your .NET project like any ordinary .NET component. You don't have to worry about native libraries and dependencies; everything is included in the package. The package can be installed from this link, or just search for MagmaSharp.

How to Build MagmaSharp from the source

  1. Download the MagmaSharp source code from the GitHub page.

  2. Reference the MAGMA static library and put it into the MagmaLib folder. The MAGMA static library can be downloaded and built from the official site.

  3. Open ‘MagmaSharp.sln’ with Visual Studio 2019.

  4. Make sure the building architecture is x64.

  5. Restore Nuget packages.

  6. Build and run the Solution.

How to start with MagmaSharp

The best way to start with MagmaSharp is to take a look at the MagmaSharp.XUnit project, where there is a small example of how to use each of the implemented methods, with or without a CUDA device.


Building Predictive Maintenance Model Using ML.NET


Summary

This C# notebook is a continuation of the previous blog post, Predictive Maintenance on .NET Platform.

The notebook is implemented entirely on the .NET platform using the C# Jupyter Notebook and Daany – a C# data analytics library. There are small differences between this notebook and the notebooks at the official Azure gallery portal, but in most cases the code follows the steps defined there.

The notebook shows how to use .NET Jupyter Notebook with Daany.DataFrame and ML.NET in order to prepare the data and build the Predictive Maintenance Model on .NET platform.

Description

In the previous post, we analyzed 5 data sets containing telemetry, error, and maintenance data, as well as failure information, for 100 machines. The data were transformed and analyzed in order to create the final data set for building a machine learning model for predictive maintenance.

Once we created all the features from the data sets, as a final step we created the label column so that it describes whether a certain machine will fail in the next 24 hours due to the failure of component1, component2, component3 or component4, or whether it will continue to work. In this part, we are going to perform the machine learning task and train a model for predicting whether a certain machine will fail in the next 24 hours due to a component failure, or will keep functioning normally in that time period.

The model we are going to build is a multi-class classification model since it has 5 values to predict:

  • component1,
  • component2,
  • component3,
  • component4 or
  • none – means it will continue to work.

ML.NET framework as library for training

In order to train the model, we are going to use ML.NET – Microsoft's open source framework for machine learning on the .NET platform. First we need some preparation code:

  • the required NuGet packages,
  • a set of using statements, and code for formatting the output.

At the beginning of this notebook, we installed several NuGet packages. The following code shows the using statements and the method for formatting data from the DataFrame.

//using Microsoft.ML.Data;
using XPlot.Plotly;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

//
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Transforms;
using Microsoft.ML.Trainers.LightGbm;
//
using Daany;
using Daany.Ext;
//DataFrame formatter
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i]){
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r)))); 
    writer.Write(t);
}, "text/html");

Once we have installed the NuGet packages and defined the using statements, we are going to define the classes we need to create an ML.NET pipeline.

The class PrMaintenanceClass contains the features (properties) we built in the previous post. We need them to define the features in the ML.NET pipeline. The second class, PrMaintenancePrediction, is used for prediction and model evaluation.

class PrMaintenancePrediction
{
    [ColumnName("PredictedLabel")]
    public string failure { get; set; }
}
class PrMaintenanceClass
{
    public DateTime datetime { get; set; }
    public int machineID { get; set; }
    public float voltmean_3hrs { get; set; }
    public float rotatemean_3hrs { get; set; }
    public float pressuremean_3hrs { get; set; }
    public float vibrationmean_3hrs { get; set; }
    public float voltstd_3hrs { get; set; }
    public float rotatestd_3hrs { get; set; }
    public float pressurestd_3hrs { get; set; }
    public float vibrationstd_3hrs { get; set; }
    public float voltmean_24hrs { get; set; }
    public float rotatemean_24hrs { get; set; }
    public float pressuremean_24hrs { get; set; }
    public float vibrationmean_24hrs { get; set; }
    public float voltstd_24hrs { get; set; }
    public float rotatestd_24hrs { get; set; }
    public float pressurestd_24hrs { get; set; }
    public float vibrationstd_24hrs { get; set; }
    public float error1count { get; set; }
    public float error2count { get; set; }
    public float error3count { get; set; }
    public float error4count { get; set; }
    public float error5count { get; set; }
    public float sincelastcomp1 { get; set; }
    public float sincelastcomp2 { get; set; }
    public float sincelastcomp3 { get; set; }
    public float sincelastcomp4 { get; set; }
    public string model { get; set; }
    public float age { get; set; }
    public string failure { get; set; }
}

Now that we have defined the class types, we are going to implement the pipeline for this ML model. First, we create an MLContext with a constant seed, so that the model can be reproduced by any user running this notebook. Then we load the data and split it into a train and a test set.

MLContext mlContext= new MLContext(seed:88888);
var strPath="data/final_dataFrame.csv";
var mlDF= DataFrame.FromCsv(strPath);
//
//split data frame on training and testing part
//split at 2015-08-01 01:00:00, to train on the first 8 months and test on the last 4 months
var trainDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.LessOrEqual);
var testDF = mlDF.Filter("datetime", new DateTime(2015, 08, 1, 1, 0, 0), FilterOperator.Greather);

The summary for the training set is shown in the following tables:

Similarly the testing set has the following summary:
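Both summaries were produced with Daany's Describe method, as used later in this notebook:

//descriptive statistics of the training set; the test set is summarized the same way with testDF
trainDF.Describe(false)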

Once we have the data in application memory, we can prepare the ML.NET pipeline. The pipeline starts with the transformation of the Daany.DataFrame type into an IDataView collection. For this task, the LoadFromEnumerable method is used.

//Load daany:DataFrame into ML.NET pipeline
public static IDataView loadFromDataFrame(MLContext mlContext,Daany.DataFrame df)
{
    IDataView dataView = mlContext.Data.LoadFromEnumerable(df.GetEnumerator(oRow =>
    {
        //convert row object array into a PrMaintenanceClass row
        var ooRow = oRow;
        var prRow = new PrMaintenanceClass();
        prRow.datetime = (DateTime)ooRow["datetime"];
        prRow.machineID = (int)ooRow["machineID"];
        prRow.voltmean_3hrs = Convert.ToSingle(ooRow["voltmean_3hrs"]);
        prRow.rotatemean_3hrs = Convert.ToSingle(ooRow["rotatemean_3hrs"]);
        prRow.pressuremean_3hrs = Convert.ToSingle(ooRow["pressuremean_3hrs"]);
        prRow.vibrationmean_3hrs = Convert.ToSingle(ooRow["vibrationmean_3hrs"]);
        prRow.voltstd_3hrs = Convert.ToSingle(ooRow["voltsd_3hrs"]);
        prRow.rotatestd_3hrs = Convert.ToSingle(ooRow["rotatesd_3hrs"]);
        prRow.pressurestd_3hrs = Convert.ToSingle(ooRow["pressuresd_3hrs"]);
        prRow.vibrationstd_3hrs = Convert.ToSingle(ooRow["vibrationsd_3hrs"]);
        prRow.voltmean_24hrs = Convert.ToSingle(ooRow["voltmean_24hrs"]);
        prRow.rotatemean_24hrs = Convert.ToSingle(ooRow["rotatemean_24hrs"]);
        prRow.pressuremean_24hrs = Convert.ToSingle(ooRow["pressuremean_24hrs"]);
        prRow.vibrationmean_24hrs = Convert.ToSingle(ooRow["vibrationmean_24hrs"]);
        prRow.voltstd_24hrs = Convert.ToSingle(ooRow["voltsd_24hrs"]);
        prRow.rotatestd_24hrs = Convert.ToSingle(ooRow["rotatesd_24hrs"]);
        prRow.pressurestd_24hrs = Convert.ToSingle(ooRow["pressuresd_24hrs"]);
        prRow.vibrationstd_24hrs = Convert.ToSingle(ooRow["vibrationsd_24hrs"]);
        prRow.error1count = Convert.ToSingle(ooRow["error1count"]);
        prRow.error2count = Convert.ToSingle(ooRow["error2count"]);
        prRow.error3count = Convert.ToSingle(ooRow["error3count"]);
        prRow.error4count = Convert.ToSingle(ooRow["error4count"]);
        prRow.error5count = Convert.ToSingle(ooRow["error5count"]);
        prRow.sincelastcomp1 = Convert.ToSingle(ooRow["sincelastcomp1"]);
        prRow.sincelastcomp2 = Convert.ToSingle(ooRow["sincelastcomp2"]);
        prRow.sincelastcomp3 = Convert.ToSingle(ooRow["sincelastcomp3"]);
        prRow.sincelastcomp4 = Convert.ToSingle(ooRow["sincelastcomp4"]);
        prRow.model = (string)ooRow["model"];
        prRow.age = Convert.ToSingle(ooRow["age"]);
        prRow.failure = (string)ooRow["failure"];
        //
        return prRow;
    }));
            
    return dataView;
}

Load the data sets into the app memory:

//Split dataset in two parts: TrainingDataset  and TestDataset          
var trainData = loadFromDataFrame(mlContext, trainDF);
var testData = loadFromDataFrame(mlContext, testDF);

Before we start training, we need to process the data so that all non-numerical columns are encoded into numerical ones. We also need to define which columns will be part of the Features and which one will be the Label. For this purpose we define the PrepareData method.

public static IEstimator<ITransformer> PrepareData(MLContext mlContext)
{
    //one hot encoding category column
    IEstimator<ITransformer> dataPipeline =

    mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(PrMaintenanceClass.failure))
    //encode model column
    .Append(mlContext.Transforms.Categorical.OneHotEncoding("model",outputKind: OneHotEncodingEstimator.OutputKind.Indicator))

    //define features column
    .Append(mlContext.Transforms.Concatenate("Features",
    // 
    nameof(PrMaintenanceClass.voltmean_3hrs), nameof(PrMaintenanceClass.rotatemean_3hrs),
    nameof(PrMaintenanceClass.pressuremean_3hrs),nameof(PrMaintenanceClass.vibrationmean_3hrs),
    nameof(PrMaintenanceClass.voltstd_3hrs), nameof(PrMaintenanceClass.rotatestd_3hrs), 
    nameof(PrMaintenanceClass.pressurestd_3hrs), nameof(PrMaintenanceClass.vibrationstd_3hrs), 
    nameof(PrMaintenanceClass.voltmean_24hrs),nameof(PrMaintenanceClass.rotatemean_24hrs),
    nameof(PrMaintenanceClass.pressuremean_24hrs),nameof(PrMaintenanceClass.vibrationmean_24hrs), 
    nameof(PrMaintenanceClass.voltstd_24hrs),nameof(PrMaintenanceClass.rotatestd_24hrs),
    nameof(PrMaintenanceClass.pressurestd_24hrs),nameof(PrMaintenanceClass.vibrationstd_24hrs), 
    nameof(PrMaintenanceClass.error1count), nameof(PrMaintenanceClass.error2count),
    nameof(PrMaintenanceClass.error3count), nameof(PrMaintenanceClass.error4count), 
    nameof(PrMaintenanceClass.error5count), nameof(PrMaintenanceClass.sincelastcomp1),
    nameof(PrMaintenanceClass.sincelastcomp2),nameof(PrMaintenanceClass.sincelastcomp3),
    nameof(PrMaintenanceClass.sincelastcomp4),nameof(PrMaintenanceClass.model), nameof(PrMaintenanceClass.age) ));

    return dataPipeline;
}

As can be seen, the method converts the label column failure, a simple textual column, into a categorical column whose numerical representations of the different categories are called keys.

Now that we have finished with data transformation, we are going to define the Train method, which implements the ML algorithm, its hyper-parameters, and the training process. Once called, the method returns the trained model.

//train method
public static TransformerChain<ITransformer> Train(MLContext mlContext, IDataView preparedData)
{
    var transformationPipeline=PrepareData(mlContext);
    //settings hyper parameters
    var options = new LightGbmMulticlassTrainer.Options();
    options.FeatureColumnName = "Features";
    options.LearningRate = 0.005;
    options.NumberOfLeaves = 50;
    options.NumberOfIterations = 2000;
    options.UnbalancedSets = true;
    //
    var boost = new DartBooster.Options();
    boost.XgboostDartMode = true;
    boost.MaximumTreeDepth = 25;
    options.Booster = boost;
    
    // Define LightGbm algorithm estimator
    IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm(options);

    //train the ML model
    TransformerChain<ITransformer> model = transformationPipeline.Append(lightGbm).Fit(preparedData);

    //return trained model for evaluation
    return model;
}

Training process and model evaluation

Since we have all required methods, the main program structure looks like:

//prepare data transformation pipeline
var dataPipeline = PrepareData(mlContext);

//print prepared data
var pp = dataPipeline.Fit(trainData);
var transformedData = pp.Transform(trainData);

//train the model
var model = Train(mlContext, trainData);

Once the Train method returns the model, the evaluation phase starts. In order to evaluate the model, we perform a full evaluation with the training and testing data.

Model Evaluation with train data set

The evaluation of the model will be performed for training and testing data sets:

//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);

ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

The model evaluation output:

************************************************************
*    Metrics for TRAIN DataSet multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9603, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.999, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0015, the closer to 0, the better
    LogLoss for class 1 = 0, the closer to 0, the better
    LogLoss for class 2 = 0.088, the closer to 0, the better
    LogLoss for class 3 = 0.0606, the closer to 0, the better
************************************************************
 
Train DataSet Confusion Matrix 
###############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 165 371 |     0 |     0 |     0 |     0 | 1.0000
    comp4 ||     0 |   772 |    16 |    25 |    11 | 0.9369
    comp1 ||     0 |     8 |   884 |    26 |     4 | 0.9588
    comp2 ||     0 |    31 |    22 | 1 097 |     8 | 0.9473
    comp3 ||     0 |    13 |     4 |     8 |   576 | 0.9584
          ||========================================
Precision ||1.0000 |0.9369 |0.9546 |0.9490 |0.9616 |

As can be seen, the model predicts the values correctly in most cases on the train data set. Now let's see how the model predicts on data which was not part of the training process.

Model evaluation with test data set

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("Test Dataset", metricsTest);

ConsoleHelper.ConsoleWriteHeader("Test DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);
************************************************************
*    Metrics for Test Dataset multi-class classification model   
*-----------------------------------------------------------
    AccuracyMacro = 0.9505, a value between 0 and 1, the closer to 1, the better
    AccuracyMicro = 0.9986, a value between 0 and 1, the closer to 1, the better
    LogLoss = 0.0033, the closer to 0, the better
    LogLoss for class 1 = 0.0012, the closer to 0, the better
    LogLoss for class 2 = 0.1075, the closer to 0, the better
    LogLoss for class 3 = 0.1886, the closer to 0, the better
************************************************************
 
Test DataSet Confusion Matrix 
##############################
 

Confusion table
          ||========================================
PREDICTED ||  none | comp4 | comp1 | comp2 | comp3 | Recall
TRUTH     ||========================================
     none || 120 313 |     6 |    15 |     0 |     0 | 0.9998
    comp4 ||     1 |   552 |    10 |    17 |     4 | 0.9452
    comp1 ||     2 |    14 |   464 |    24 |    24 | 0.8788
    comp2 ||     0 |    39 |     0 |   835 |    16 | 0.9382
    comp3 ||     0 |     4 |     0 |     0 |   412 | 0.9904
          ||========================================
Precision ||1.0000 |0.8976 |0.9489 |0.9532 |0.9035 |

We can see that the model has an overall accuracy of 99%, and a 95% average per-class accuracy. The complete notebook for this blog post can be found here.

Predictive Maintenance on .NET Platform


Summary

This article is based on the Azure AI Gallery article: Predictive Maintenance Modeling Guide, which includes the data sets used in this article.

However, this notebook is completely implemented on .NET platform using:

  • C# Jupyter Notebook – the Jupyter Notebook experience with C# and .NET,
  • ML.NET – Microsoft's open source framework for machine learning, and
  • Daany – DAta ANalYtics open source library for data analytics. It can be installed as a NuGet package.

There are small differences between this notebook and the notebooks at the official Azure gallery portal, but in most cases the code follows the steps defined there. The purpose of this notebook is to demonstrate how to use .NET Jupyter Notebook with Daany.DataFrame and ML.NET in order to prepare the data and build the Predictive Maintenance Model on the .NET platform. But first let's see what Predictive Maintenance is and why it is important.

Quick Introduction to Predictive Maintenance

Simply speaking, it is a technique to determine (predict) the failure of a machine component in the near future, so that the component can be replaced based on the maintenance plan before it fails and stops the production process. Predictive maintenance can improve the production process and increase productivity. By successfully applying predictive maintenance we are able to achieve the following goals:

  • reduce the operational risk of mission-critical equipment

  • control cost of maintenance by enabling just-in-time maintenance operations

  • discover patterns connected to various maintenance problems

  • provide Key Performance Indicators.

The following image shows the different types of maintenance in production.

Predictive Maintenance

Predictive maintenance data collection

In order to apply this technique we need various data from the production process, including but not limited to:

  • telemetry data from the observed machines (vibration, voltage, temperature etc)
  • errors and logs data relevant to each machine,
  • failure data, when a certain component is replaced, etc
  • quality and accuracy data, machine properties, models, age etc.

3 Steps in Predictive Maintenance

Usually, every Predictive Maintenance technique should proceed by the following 3 main steps:

  1. Collect Data – collect all possible descriptions, historical and real-time data, usually by using IoT devices, various loggers, technical documentation, etc.

  2. Predict Failures – the collected data can be transformed into machine-learning-ready data sets, which are used to build a machine learning model that predicts the failures of components in the set of machines in production.

  3. React – by obtaining the information about which components will fail in the near future, we can activate the replacement process so that each component is replaced before it fails and the production process is not interrupted.

Predict Failures

In this article, the second step, data preparation, will be presented. In order to predict failures in the production process, a set of data transformations, cleaning, feature engineering, and selection must be performed to prepare the data for building a machine learning model. The data preparation part plays a crucial role in model building, since quality data preparation directly reflects on the model's accuracy and reliability.

Software requirements

In this article, the complete procedure in data preparation is presented. The whole process is performed using:

  • .NET Core 3.1 – the latest .NET platform version,

  • .NET Jupyter Notebook – the .NET implementation of the popular Jupyter Notebook,

  • ML.NET – Microsoft open-source framework for Machine Learning on .NET Platform and

  • Daany – DAta ANalYtics library. It can be found on GitHub and also as a NuGet package.

Notebook preparation

In order to complete this task, we should install several NuGet packages and include several using statements. The following code block shows the using statements and additional code related to the notebook output format.

Note: NuGet package installation must be in the first cell of the notebook, otherwise the notebook will not work as expected. Hopefully this will change once the final version is released.

//using Microsoft.ML.Data;
using XPlot.Plotly;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
//using statement of Daany package
using Daany;
using Daany.MathStuff;
using Daany.Ext;
//
using Microsoft.ML;
//DataFrame formatter
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));    
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;  
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }    
    var t = table(
        thead(headers),
        tbody(rows.Select(r => tr(r))));
    
    writer.Write(t);
}, "text/html");

Download the data

In order to start with data preparation, we need data. The data can be found at Azure blob storage and is maintained by the Azure Gallery article.

Once the data are downloaded from the blob storage, they will not be downloaded again; the local copies will be used instead.
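A minimal sketch of that download-once behavior (the URL below is a placeholder, not the actual blob address; the real URLs are listed in the accompanying notebook):

using System.IO;
using System.Net;

//download a file only if a local copy does not already exist
void DownloadIfMissing(string url, string localPath)
{
    if (File.Exists(localPath))
        return; //reuse the local copy
    using (var client = new WebClient())
        client.DownloadFile(url, localPath);
}

//hypothetical address, for illustration only
DownloadIfMissing("https://<blobstorage>/PdM_telemetry.csv", "data/PdM_telemetry.csv");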

The Data

The data we are using for predictive maintenance can be classified into:

  • telemetry – which collects historical data about machine behavior (voltage, vibration, etc)
  • errors – the data about warnings and errors in the machines
  • maint – data about replacement and maintenance for the machines,
  • machines – descriptive information about the machines,
  • failures – data when a certain machine is stopped, due to component failure.

We load all the files in order to fully prepare the data for the training process. The following code sample loads the data into application memory.

%%time
//Load ALL 5 data frame files
//DataFrame Cols: datetime,machineID,volt,rotate,pressure,vibration
var telemetry = DataFrame.FromCsv("data/PdM_telemetry.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var errors = DataFrame.FromCsv("data/PdM_errors.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var maint = DataFrame.FromCsv("data/PdM_maint.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var failures = DataFrame.FromCsv("data/PdM_failures.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var machines = DataFrame.FromCsv("data/PdM_machines.csv", dformat: "yyyy-mm-dd hh:mm:ss");

Telemetry

The first data source is the telemetry data about the machines. It consists of voltage, rotation, pressure, and vibration measurements collected from 100 machines in real time, at hourly intervals, during the year 2015. The following data shows the first 10 records in the dataset:
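The records can be listed with the Head method used throughout this notebook:

telemetry.Head(10)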

Predictive Maintenance

A description of the whole dataset is shown in the next cell. As can be seen, we have nearly a million records for the machines, which is a good starting point for the analysis.
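The description is produced with the Describe method:

telemetry.Describe(false)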

Predictive Maintenance

In case we want to visualize the telemetry data, we can select one of the columns and plot it.
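For example, a sketch of plotting the voltage of a single machine, following the same XPlot pattern and Daany Filter calls used for the charts below (illustrative only):

//plot the voltage of machine 1 over time (illustrative)
var m1 = telemetry.Filter("machineID", 1, FilterOperator.Equal);
var voltChart = Chart.Plot(
    new Graph.Scatter()
    {
        x = m1["datetime"],
        y = m1["volt"],
        mode = "lines"
    }
);
voltChart.WithLayout(new XPlot.Plotly.Layout.Layout() { title = "Machine 1 voltage" });
display(voltChart)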

Predictive Maintenance

Errors

One of the most important pieces of information in every predictive maintenance system is error data. Errors are non-breaking recorded events while the machine is still operational. The error dates and times are rounded to the closest hour, since the telemetry data is collected at an hourly rate.

errors.Head()

Predictive Maintenance

//count number of errors 
var barValue = errors["errorID"].GroupBy(v => v)
        .OrderBy(group => group.Key)
        .Select(group => Tuple.Create(group.Key, group.Count()));

//Plot Errors data
var chart = Chart.Plot(
    new Graph.Bar()
    {
       x = barValue.Select(x=>x.Item1),
       y = barValue.Select(x=>x.Item2),
      //  mode = "markers",  
    }
    
);
var layout = new XPlot.Plotly.Layout.Layout() 
    { title = "Error distribution",
     xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Error name" }, 
     yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Error Count" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)

Predictive Maintenance

Maintenance

Maintenance is the next component, which tells us about scheduled and unscheduled maintenance. It contains records that correspond both to the regular inspection of components and to failures. A record is added to the maintenance table when a component is replaced, either during a scheduled inspection or due to a breakdown. Records created due to breakdowns are called failures. The maintenance data covers the years 2014 and 2015.

maint.Head()

Predictive Maintenance

Machines

The data include information about the 100 machines which are the subject of the predictive maintenance analysis. The information includes the model type and the machine age. The distribution of machine age, categorized by model, across the production process is shown in the following image:

//Distribution of models across age
var d1 = machines.Filter("model", "model1", FilterOperator.Equal)["age"]
                                    .GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d2 = machines.Filter("model", "model2", FilterOperator.Equal)["age"]
                                    .GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d3 = machines.Filter("model", "model3", FilterOperator.Equal)["age"]
                                    .GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d4 = machines.Filter("model", "model4", FilterOperator.Equal)["age"]
                                    .GroupBy(g => g).Select(g=>(g.Key,g.Count()));
//define bars
var b1 = new Graph.Bar(){ x = d1.Select(x=>x.Item1),y = d1.Select(x=>x.Item2),name = "model1"};
var b2 = new Graph.Bar(){ x = d2.Select(x=>x.Item1),y = d2.Select(x=>x.Item2),name = "model2"};
var b3 = new Graph.Bar(){ x = d3.Select(x=>x.Item1),y = d3.Select(x=>x.Item2),name = "model3"};
var b4 = new Graph.Bar(){ x = d4.Select(x=>x.Item1),y = d4.Select(x=>x.Item2),name = "model4"};
    
//Plot machine data
var chart = Chart.Plot(new[] {b1,b2,b3,b4});
var layout = new XPlot.Plotly.Layout.Layout() 
    { title = "Components Replacements",barmode="stack",
     xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Machine Age" }, 
     yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Count" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)

Predictive Maintenance

Failures

The failures data represent the replacements of components due to machine failure. Once a failure happens, the machine stops. This is the crucial difference between errors and failures.

failures.Head()

Predictive Maintenance

//count number of failures  
var falValues = failures["failure"].GroupBy(v => v)
        .OrderBy(group => group.Key)
        .Select(group => Tuple.Create(group.Key, group.Count()));

//Plot Failure data
var chart = Chart.Plot(
    new Graph.Bar()
    {
       x = falValues.Select(x=>x.Item1),
       y = falValues.Select(x=>x.Item2),
      //  mode = "markers",  
    }
    
);
var layout = new XPlot.Plotly.Layout.Layout() 
    { title = "Failure Distribution across machines",
     xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Component Name" }, 
     yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Number of components replaces" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)

Predictive Maintenance

Feature Engineering

This section contains several feature engineering methods used to create features based on the machines’ properties.

Lagged Telemetry Features

First, we are going to create several lagged telemetry features, since telemetry data are classic time series data.

In the following, the rolling mean and standard deviation of the telemetry data over the last 3-hour lag window are calculated every 3 hours.

//prepare rolling aggregation for each column for current (last) values
var agg_curent = new Dictionary<string, Aggregation>()
 {
    { "datetime", Aggregation.Last }, { "volt", Aggregation.Last }, { "rotate", Aggregation.Last },
    { "pressure", Aggregation.Last },{ "vibration", Aggregation.Last }
  };
//prepare rolling aggregation for each column for average values
var agg_mean = new Dictionary<string, Aggregation>()
 {
    { "datetime", Aggregation.Last }, { "volt", Aggregation.Avg }, { "rotate", Aggregation.Avg },
    { "pressure", Aggregation.Avg },{ "vibration", Aggregation.Avg }
  };
//prepare rolling aggregation for each column for std values
var agg_std = new Dictionary<string, Aggregation>()
{
   { "datetime", Aggregation.Last }, { "volt", Aggregation.Std }, { "rotate", Aggregation.Std },
    { "pressure", Aggregation.Std },{ "vibration", Aggregation.Std }
};
//group Telemetry data by machine ID
var groupedTelemetry = telemetry.GroupBy("machineID");
//calculate rolling mean for grouped data for each 3 hours
var _3AvgValue = groupedTelemetry.Rolling(3, 3, agg_mean)
                 .Create(("machineID", null), ("datetime", null),("volt", "voltmean_3hrs"), ("rotate", "rotatemean_3hrs"),
                         ("pressure", "pressuremean_3hrs"), ("vibration", "vibrationmean_3hrs"));
//show head of the newly generated table
_3AvgValue.Head()

Predictive Maintenance

//calculate rolling std for grouped data for each 3 hours
var _3StdValue = groupedTelemetry.Rolling(3, 3, agg_std)
                 .Create(("machineID", null), ("datetime", null),("volt", "voltsd_3hrs"), ("rotate", "rotatesd_3hrs"),
                         ("pressure", "pressuresd_3hrs"), ("vibration", "vibrationsd_3hrs"));
//show head of the newly generated table
_3StdValue.Head()

To capture a longer-term effect, we are going to calculate the rolling average and standard deviation over a 24-hour lag window.

//calculate rolling avg and std for each 24 hours
var _24AvgValue = groupedTelemetry.Rolling(24, 3, agg_mean)
                .Create(("machineID", null), ("datetime", null),
                        ("volt", "voltmean_24hrs"), ("rotate", "rotatemean_24hrs"),
                        ("pressure", "pressuremean_24hrs"), ("vibration", "vibrationmean_24hrs"));
var _24StdValue = groupedTelemetry.Rolling(24, 3, agg_std)
                .Create(("machineID", null), ("datetime", null),
                        ("volt", "voltsd_24hrs"), ("rotate", "rotatesd_24hrs"),
                        ("pressure", "pressuresd_24hrs"), ("vibration", "vibrationsd_24hrs"));

Merging telemetry features

Once we have rolling lag features calculated, we can merge them into one data frame:

//before merge all features create set of features from the current values for every 3 or 24 hours
DataFrame _1CurrentValue = groupedTelemetry.Rolling(3, 3, agg_curent)
                            .Create(("machineID", null), ("datetime", null),
                            ("volt", null), ("rotate", null), ("pressure", null), ("vibration", null));

Now that we have the base data frame, we merge the previously calculated data frames with it.

//merge all telemetry data frames into one
var mergeCols= new string[] { "machineID", "datetime" };
var df1 = _1CurrentValue.Merge(_3AvgValue, mergeCols, mergeCols, JoinType.Left, suffix: "df1");   
                                            
var df2 = df1.Merge(_24AvgValue, mergeCols, mergeCols, JoinType.Left, suffix: "df2");
                                 
var df3 = df2.Merge(_3StdValue, mergeCols, mergeCols, JoinType.Left, suffix: "df3");
                                
var df4 = df3.Merge(_24StdValue, mergeCols, mergeCols, JoinType.Left, suffix: "df4");

At the end of the merging process, select relevant columns.

//select final dataset for the telemetry
var telDF = df4["machineID","datetime","volt","rotate", "pressure", "vibration",
                 "voltmean_3hrs","rotatemean_3hrs","pressuremean_3hrs","vibrationmean_3hrs",
                 "voltmean_24hrs","rotatemean_24hrs","pressuremean_24hrs","vibrationmean_24hrs",
                 "voltsd_3hrs", "rotatesd_3hrs","pressuresd_3hrs","vibrationsd_3hrs",
                 "voltsd_24hrs", "rotatesd_24hrs","pressuresd_24hrs","vibrationsd_24hrs"];

//remove NANs
var telemetry_final = telDF.DropNA();

Now the top 5 rows of the final telemetry data look like the following image:

telemetry_final.Head()

Predictive Maintenance

Lag Features from Errors

Unlike telemetry, which has numerical values, errors have categorical values denoting the type of error that occurred at a time stamp. We are going to aggregate the errors into counts of each error type occurring in the lag window.

First, encode the errors with One-Hot-Encoding:

var mlContext = new MLContext(seed:2019);
//One Hot Encoding of error column
var encodedErr = errors.EncodeColumn(mlContext, "errorID");

//sum duplicated errors by machine and date
var errors_aggs = new Dictionary<string, Aggregation>();
errors_aggs.Add("error1", Aggregation.Sum);
errors_aggs.Add("error2", Aggregation.Sum);
errors_aggs.Add("error3", Aggregation.Sum);
errors_aggs.Add("error4", Aggregation.Sum);
errors_aggs.Add("error5", Aggregation.Sum);

//group and sum duplicated errors
encodedErr =  encodedErr.GroupBy(new string[] { "machineID", "datetime" }).Aggregate(errors_aggs);

//
encodedErr = encodedErr.Create(("machineID", null), ("datetime", null),
                        ("error1", "error1sum"), ("error2", "error2sum"),
                        ("error3", "error3sum"), ("error4", "error4sum"), ("error5", "error5sum"));
encodedErr.Head()                       

Predictive Maintenance

// align errors with telemetry datetime values so that we can calculate aggregations
var er = telemetry.Merge(encodedErr,mergeCols, mergeCols, JoinType.Left, suffix: "error");
//
er = er["machineID","datetime", "error1sum", "error2sum", "error3sum", "error4sum", "error5sum"];
//fill missing values with 0
er.FillNA(0);
er.Head()   

Predictive Maintenance

//count the number of errors of different types in the last 24 hours, for every 3 hours
//define aggregation
var errors_aggs1 = new Dictionary<string, Aggregation>()
{
  { "datetime", Aggregation.Last },{ "error1sum", Aggregation.Sum }, { "error2sum", Aggregation.Sum }, 
  { "error3sum", Aggregation.Sum },{ "error4sum", Aggregation.Sum },
  { "error5sum", Aggregation.Sum }
};

//count the number of errors of different types in the last 24 hours,  for every 3 hours
var eDF = er.GroupBy(new string[] { "machineID"}).Rolling(24, 3, errors_aggs1);

//
var newdf=  eDF.DropNA();

var errors_final = newdf.Create(("machineID", null), ("datetime", null),
                        ("error1sum", "error1count"), ("error2sum", "error2count"),
                        ("error3sum", "error3count"), ("error4sum", "error4count"), ("error5sum", "error5count"));
errors_final.Head()

Predictive Maintenance

The Time Since Last Replacement

The main task here is to create relevant features in order to build a quality data set for the machine learning part. One good feature would be the number of replacements of each component in the last 3 months, which incorporates the frequency of replacements (see the sketch below).
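In isolation, the replacement-count feature can be sketched with plain LINQ over a list of replacement dates (a hypothetical helper, not part of the notebook code):

using System;
using System.Collections.Generic;
using System.Linq;

//count the replacements of one component in the 90 days before a reference time
static int ReplacementsInLast3Months(IEnumerable<DateTime> replacementDates, DateTime refDate)
{
    return replacementDates.Count(d => d <= refDate && d > refDate.AddDays(-90));
}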

Furthermore, we can calculate how long it has been since a component was last replaced; this should correlate well with component failures, since the longer a component is used, the more degradation is expected. First, we encode the maintenance table:

//One Hot Encoding of component column
var encMaint = maint.EncodeColumn(mlContext, "comp");
encMaint.Head()

Predictive Maintenance

//create separate data frames in order to calculate proper time since last replacement 
DataFrame dfComp1 = encMaint.Filter("comp1", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp2 = encMaint.Filter("comp2", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp3 = encMaint.Filter("comp3", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp4 = encMaint.Filter("comp4", 1, FilterOperator.Equal)["machineID", "datetime"];

dfComp4.Head()

Predictive Maintenance

//from telemetry data create a helper data frame so we can calculate additional columns from the maintenance data frame
var compData = telemetry_final.Create(("machineID", null), ("datetime", null));
%%time
//calculate new set of columns so that we have information the time since last replacement of each component separately
var newCols= new string[]{"sincelastcomp1","sincelastcomp2","sincelastcomp3","sincelastcomp4"};
var calcValues= new object[4];

//perform calculation
compData.AddCalculatedColumns(newCols,(row, i)=>
{
    var machineId = Convert.ToInt32(row["machineID"]);
    var date = Convert.ToDateTime(row["datetime"]);
    
    var maxDate1 = dfComp1.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
        .Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
    var maxDate2 = dfComp2.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
        .Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
    var maxDate3 = dfComp3.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
        .Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
    var maxDate4 = dfComp4.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
        .Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
        
    //perform calculation
    calcValues[0] = (date - maxDate1).TotalDays;
    calcValues[1] = (date - maxDate2).TotalDays;
    calcValues[2] = (date - maxDate3).TotalDays;
    calcValues[3] = (date - maxDate4).TotalDays;
    return calcValues;
});
Wall time: 178708.9764ms

Predictive Maintenance

var maintenance_final = compData;
maintenance_final.Head()

Machine Features

The machine data set contains descriptive information about the machines, like the machine model and age, where age is the number of years in service.

machines.Head()

Predictive Maintenance

Joining features into final ML ready data set

As the last step in feature engineering, we merge all the features into one data set.

var merge2Cols=new string[]{"machineID"};
var fdf1= telemetry_final.Merge(errors_final, mergeCols, mergeCols,JoinType.Left, suffix: "er");
var fdf2 = fdf1.Merge(maintenance_final, mergeCols,mergeCols,JoinType.Left, suffix: "mn");
var features_final = fdf2.Merge(machines, merge2Cols,merge2Cols,JoinType.Left, suffix: "ma");
features_final= features_final["datetime", "machineID", 
            "voltmean_3hrs", "rotatemean_3hrs", "pressuremean_3hrs", "vibrationmean_3hrs",
            "voltstd_3hrs", "rotatestd_3hrs", "pressurestd_3hrs", "vibrationstd_3hrs", 
            "voltmean_24hrs", "rotatemean_24hrs", "pressuremean_24hrs", "vibrationmean_24hrs", 
            "voltstd_24hrs","rotatestd_24hrs", "pressurestd_24hrs", "vibrationstd_24hrs", 
            "error1count", "error2count", "error3count", "error4count", "error5count", 
            "sincelastcomp1", "sincelastcomp2", "sincelastcomp3", "sincelastcomp4", 
            "model", "age"];
//

features_final.Head();
DataFrame.ToCsv("data/final_features.csv", features_final);

Define Label Column

The label in predictive maintenance should indicate whether a machine will fail in the near future due to the failure of a certain component. If we take 24 hours as the window for this problem, the label construction consists of a new column in the feature data set which indicates whether a certain machine will fail in the next 24 hours due to the failure of one of several components.

In this way we define the label as a categorical variable containing:

  • none – if the machine will not fail in the next 24 hours,
  • comp1 to comp4 – if the machine will fail in the next 24 hours due to the failure of the corresponding component.

Since we can experiment with the label construction by applying different conditions, we can implement methods that take several arguments in order to define the general problem.

failures.Describe(false)

Predictive Maintenance

//constructing the label column which indicates if the current machine will 
//fail in the next `predTime` (24 hours as default) due to the failure of a certain component
//create final data frame from feature df
var finalDf = new DataFrame(features_final);

//group failures by machineID and datetime 
string[] cols = new string[] {  "machineID" , "datetime"};
var failDfgrp = failures.GroupBy(cols);

//Add failure column to  finalDF
var rV = new object[] { "none" };
finalDf.AddCalculatedColumns(new string[]{"failure"}, (object[] row, int i) => rV);

//create new data frame from featuresDF by grouping machineID and datetime
var featureDfGrouped = finalDf["datetime","machineID", "failure"].GroupBy(cols);

//now look at every failure and determine if the machine will fail in the next 24 hours
//in case two or more components failed for the same machine, add a new row to the df
var failureDfExt = featureDfGrouped.Transform((xdf) =>
{
    //extract the row from featureDfGrouped
    var xdfRow = xdf[0].ToList();
    var refDate = (DateTime)xdfRow[0];
    var machineID = (int)xdfRow[1];

    //now look if the failure contains the machineID
    if(failDfgrp.Group2.ContainsKey(machineID))
    {
        //get the date and calculate total hours
        var dff = failDfgrp.Group2[machineID];

        foreach (var dfff in dff)
        {
            for (int i = 0; i < dfff.Value.RowCount(); i++)
            {
                //"datetime","machineID","failure"
                var frow = dfff.Value[i].ToList();
                var dft = (DateTime)frow[0];
                
                //if total hours is less or equal than 24 hours set component to the failure column
                var totHours = (dft - refDate).TotalHours;
                if (totHours <= 24 && totHours >=0)
                {
                    if (xdf.RowCount() > i)
                        xdf["failure", i] = frow[2];
                    else//in case two components were failed for the same machine and 
                        //at the same time, add new row with new component name
                    {
                        var r = xdf[0].ToList();
                        r[2] = frow[2];
                        xdf.AddRow(r);
                    }
                }
            }
        }
    }
    return xdf;
});

//Now merge extended failure Df with featureDF
var final_dataframe = finalDf.Merge(failureDfExt, cols, cols,JoinType.Left, "fail");

//define final set of columns
final_dataframe = final_dataframe["datetime", "machineID",
"voltmean_3hrs", "rotatemean_3hrs", "pressuremean_3hrs", "vibrationmean_3hrs",
"voltsd_3hrs", "rotatesd_3hrs", "pressuresd_3hrs", "vibrationsd_3hrs",
"voltmean_24hrs", "rotatemean_24hrs", "pressuremean_24hrs", "vibrationmean_24hrs",
"voltsd_24hrs", "rotatesd_24hrs", "pressuresd_24hrs", "vibrationsd_24hrs",
"error1count", "error2count", "error3count", "error4count", "error5count",
"sincelastcomp1", "sincelastcomp2", "sincelastcomp3", "sincelastcomp4",
"model", "age", "failure_fail"];

//rename column
final_dataframe.Rename(("failure_fail", "failure"));
//save the file data frame to disk
DataFrame.ToCsv("data/final_dataFrame.csv",final_dataframe);

Final Data Frame

Let's see what the final_dataframe looks like. It contains 30 columns, most of which are numerical. The model column is categorical, and it will be encoded once we prepare the machine learning part.

The label column failure is also categorical, containing 5 different categories: none, comp1, comp2, comp3 and comp4. We can also see that the data set is not balanced: we have 2785705 none rows and only 5923 rows in total for all the other categories. This is a typical unbalanced dataset, and we should be careful when evaluating models, because a model which always returns none will still have more than 99% accuracy.
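A quick sanity check of that baseline (the accuracy of a model that always answers none):

//accuracy of the trivial "always none" classifier
double baseline = 2785705.0 / (2785705 + 5923); //≈ 0.998, i.e. about 99.8%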

final_dataframe.Describe(false)

Predictive Maintenance

In the next part, we are going to implement the training and evaluation process of the Predictive Maintenance model. The full notebook for this blog post can be found here.

C# Jupyter Notebook Part 2/n


What is .NET Jupyter Notebook

In this blog post, we are going to explore the main features of the new C# Jupyter Notebook. For those who have used notebooks in other programming languages like Python or R, this will be an easy task. First of all, the notebook concept provides a quick, simple and straightforward way to present a mix of text and $\LaTeX$, source code, and its output. This means you have a full-featured platform to write a paper or blog post, presentation slides, lecture notes, and other educational materials.

The notebook consists of cells, where a user can write code or markdown text. Once the cell content is complete, the cell can be run with Ctrl+Enter or by pressing the Run button on the notebook toolbar. The image below shows the notebook toolbar with the Run button. The popup combo box shows the type of cell the user can define: for text, Markdown should be selected; for writing source code, the cell should be Code.

run button

To start writing code in the C# notebook, the first thing we should do is install NuGet packages or add assembly references and define using statements. The following code installs several NuGet packages and declares several using statements. But before writing code, we should add a new cell by pressing the + toolbar button.

The first few NuGet packages are the ML.NET packages. Then we install the XPlot package for data visualization in the .NET notebook, and a set of Daany packages for data analytics: Daany.DataFrame for data exploration and analysis, and Daany.DataFrame.Ext, a set of extensions for data manipulation used with ML.NET.

//ML.NET Packages
#r "nuget:Microsoft.ML.LightGBM"
#r "nuget:Microsoft.ML"
#r "nuget:Microsoft.ML.DataView"

//Install XPlot package
#r "nuget:XPlot.Plotly"

//Install Daany.DataFrame 
#r "nuget:Daany.DataFrame"
#r "nuget:Daany.DataFrame.Ext"
using System;
using System.Linq;

//Daany data frame
using Daany;
using Daany.Ext;

//Plotting functionalities
using XPlot.Plotly;

//ML.NET using
using Microsoft.ML;
using Microsoft.ML.Data;
using Microsoft.ML.Trainers.LightGbm;

The output for the above code:

run button

Once the NuGet packages are installed successfully, we can start with data exploration. But before that, we need a few more using statements and some global definitions.

We can define classes or methods globally. The following code implements the formatter method for displaying Daany.DataFrame in the output cell.

// Temporal DataFrame formatter for this early preview
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    var headers = new List<IHtmlContent>();
    headers.Add(th(i("index")));
    headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c)));
    
    //renders the rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    
    //
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        cells.Add(td(df.Index[i]));
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    
    var t = table(
        thead(
            headers),
        tbody(
            rows.Select(
                r => tr(r))));
    
    writer.Write(t);
}, "text/html");

For this demo we will use the famous Iris data set. We will download the file from the internet, load it using Daany.DataFrame, and display the first few rows. In order to do that we run the following code:

var url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data";
var cols = new string[] {"sepal_length","sepal_width", "petal_length", "petal_width", "flower_type"};
var df = DataFrame.FromWeb(url, sep:',',names:cols);
df.Head(5)

The output looks like this:

run button

As can be seen, the last line of the previous code has no semicolon, which means its value is displayed in the output cell. Let's move on and implement two new columns. The new columns will be the sepal and petal areas of the flower. The expressions we are going to use are:

$$ PetalArea = petal\_width \cdot petal\_length, \qquad SepalArea = sepal\_width \cdot sepal\_length $$

As can be seen, the $\LaTeX$ is fully supported in the notebook.

The above formulas are implemented in the following code:

//calculate two new columns into dataset
df.AddCalculatedColumn("SepalArea", (r, i) => Convert.ToSingle(r["sepal_width"]) * Convert.ToSingle(r["sepal_length"]));
df.AddCalculatedColumn("PetalArea", (r, i) => Convert.ToSingle(r["petal_width"]) * Convert.ToSingle(r["petal_length"]));
df.Head(5)

run button

The data frame has two new columns, which indicate the areas of the flower. In order to see the basic statistics for each of the defined columns, we call the Describe method.

//see descriptive stats of the final ds
df.Describe(false)

run button

From the table above, we can see the flower column has only 3 values. The most frequent value has a frequency equal to 50, which is an indicator of a balanced dataset.

Data visualization

The most powerful feature in the notebook is data visualization. In this section, we are going to plot some interesting charts.

In order to see how sepal and petal areas are spread in 2D plane, the following plot is implemented:

//plot the data in order to see how areas are spread in the 2d plane
//XPlot Histogram reference: http://tpetricek.github.io/XPlot/reference/xplot-plotly-graph-histogram.html

var faresHistogram = Chart.Plot(new Graph.Histogram(){x = df["flower_type"], autobinx = false, nbinsx = 20});
var layout = new Layout.Layout(){title="Distribution of iris flower"};
faresHistogram.WithLayout(layout);
display(faresHistogram);

run button

The chart is also an indication of a balanced dataset.

Now let's plot the areas depending on the flower type:

// Plot Sepal vs. Petal area with flower type

var chart = Chart.Plot(
    new Graph.Scatter()
    {
        x = df["SepalArea"],
        y = df["PetalArea"],
        mode = "markers",
        marker = new Graph.Marker()
        {
            color = df["flower_type"].Select(x=>x.ToString()=="Iris-virginica"?1:(x.ToString()=="Iris-versicolor"?2:3)),
            colorscale = "Jet"
        }
    }
);

var layout = new Layout.Layout(){title="Plot Sepal vs. Petal Area & color scale on flower type"};
chart.WithLayout(layout);
chart.WithLegend(true);
chart.WithLabels(new string[3]{"Iris-virginica","Iris-versicolor", "Iris-setosa"});
chart.WithXTitle("Sepal Area");
chart.WithYTitle("Petal Area");
chart.Width = 800;
chart.Height = 400;

display(chart);

run button

As can be seen from the chart above, the flower types are separated almost linearly, since we used petal and sepal areas instead of width and length. With this transformation, we can get a 100% accurate ML model.

Machine Learning

Once we have finished with data transformation and visualization, we can define the final data frame for the machine learning part. To that end, we are going to select only two feature columns and one label column, which will be the flower type.

//create new data-frame by selecting only three columns
var derivedDF = df["SepalArea","PetalArea","flower_type"];
derivedDF.Head(5)

run button

Since we are going to use ML.NET, we need to declare an Iris class in order to load the data into ML.NET.

//Define an Iris class for machine learning.
class Iris
{
    public float PetalArea { get; set; }
    public float SepalArea { get; set; }
    public string Species { get; set; }
}
//Create ML Context
MLContext mlContext = new MLContext(seed:2019);

Then load the data from Daany data frame into ML.NET:

//Load Data Frame into Ml.NET data pipeline
IDataView dataView = mlContext.Data.LoadFromEnumerable<Iris>(derivedDF.GetEnumerator<Iris>((oRow) =>
{
    //convert row object array into Iris row

    var prRow = new Iris();
    prRow.SepalArea = Convert.ToSingle(oRow["SepalArea"]);
    prRow.PetalArea = Convert.ToSingle(oRow["PetalArea"]);
    prRow.Species = Convert.ToString(oRow["flower_type"]);
    //
    return prRow;
}));

Once we have data, we can split it into train and test sets:

//Split dataset in two parts: TrainingDataset (80%) and TestDataset (20%)
var trainTestData = mlContext.Data.TrainTestSplit(dataView, testFraction: 0.2);
var trainData = trainTestData.TrainSet;
var testData = trainTestData.TestSet;

The next step in preparing the data for training is to define the pipeline for data transformation and feature engineering:

//one encoding output category column by defining KeyValues for each category
IEstimator<ITransformer> dataPipeline =
mlContext.Transforms.Conversion.MapValueToKey(outputColumnName: "Label", inputColumnName: nameof(Iris.Species))

//define features columns
.Append(mlContext.Transforms.Concatenate("Features",nameof(Iris.SepalArea), nameof(Iris.PetalArea)));

Once we complete the preparation part, we can perform the training. The training starts by calling Fit on the pipeline:

%%time
 // Define LightGbm algorithm estimator
IEstimator<ITransformer> lightGbm = mlContext.MulticlassClassification.Trainers.LightGbm();
//train the ML model
TransformerChain<ITransformer> model = dataPipeline.Append(lightGbm).Fit(trainData);

Once the training completes, we have a trained model which can be evaluated. In order to print the evaluation results with formatting, we use the Daany DataFrame extensions, which implement printing of the results.


//evaluate train set
var predictions = model.Transform(trainData);
var metricsTrain = mlContext.MulticlassClassification.Evaluate(predictions);
ConsoleHelper.PrintMultiClassClassificationMetrics("TRAIN Iris DataSet", metricsTrain);
ConsoleHelper.ConsoleWriteHeader("Train Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTrain.ConfusionMatrix);

run button

//evaluate test set
var testPrediction = model.Transform(testData);
var metricsTest = mlContext.MulticlassClassification.Evaluate(testPrediction);
ConsoleHelper.PrintMultiClassClassificationMetrics("TEST Iris Dataset", metricsTest);
ConsoleHelper.ConsoleWriteHeader("Test Iris DataSet Confusion Matrix ");
ConsoleHelper.ConsolePrintConfusionMatrix(metricsTest.ConfusionMatrix);

run button

As can be seen, we have a 100% accurate model for Iris flower recognition. Now, let's add a new column called PredictedLabel to the data frame, so that the model's prediction sits alongside the data.
In order to do that, we evaluate the model on the train and the test data sets. Once we have predictions for both sets, we join them and add them as a separate column in the Daany data frame. The following code does exactly what we described.

var flowerLabels = DataFrameExt.GetLabels(predictions.Schema).ToList();
var p1 = predictions.GetColumn<uint>("PredictedLabel").Select(x=>(int)x).ToList();
var p2 = testPrediction.GetColumn<uint>("PredictedLabel").Select(x => (int)x).ToList();
//join train and test
p1.AddRange(p2);
var p = p1.Select(x => (object)flowerLabels[x-1]).ToList();
//add new column into df
var dic = new Dictionary<string, List<object>> { { "PredictedLabel", p } };
var dff = derivedDF.AddColumns(dic);
dff.Head()

run button

The output above shows the first few rows in the data frame. To see the last few rows of the data frame, we call the Tail method.

dff.Tail()

run button

In this blog post, we saw how can we be more productive when using .NET Jupyter Notebook with Machine Learning and Data Exploration and transformation, by using ML.NET and Daany – DAtaANalYtics library. Complete source code for this notebook can be found at GitHub repo: https://github.com/bhrnjica/notebooks/blob/master/net_jupyter_notebook_part2.ipynb