Daany Library version 0.7.0 – brings a new set of high performance routines for Linear Algebra

Over the last few years the Daany library has received a lot of attention from the community, since it contains implementations that are crucial for machine learning and data transformation on .NET. There are plenty of great .NET libraries on GitHub, but Daany contains a very specific set of components that makes it special.

Today I am happy to announce a new package within the Daany library called Daany.LinA – the package for linear algebra. The package is a .NET wrapper around the MKL LAPACK and BLAS routines. By combining DataFrame, Daany.Stat and Daany.LinA you can build high performance code for data analytics. For more information please visit http://github.com/bhrnjica/daany and check the Developer Guide and unit tests.

Introduction

A few weeks ago, I was doing research and I needed a fast program for Singular Value Decomposition. I had an SVD implementation in my open source project Daany, which uses the SVD implementation of Accord.NET, a great machine learning framework. The decomposition works fine and smoothly for small matrices with a few hundred rows/columns, but for matrices with more than 500 rows and columns it is pretty slow. So I was forced to think about using a different library in order to speed up the SVD calculation. I could use some of the Python libraries, e.g. TensorFlow, PyTorch or SciPy, or similar libraries from R and C++. I have used such libraries and I know how fast they are. But I still wanted approximately the same speed on .NET as well.

Then I decided to look at how I could use some of the available C++ based libraries. If I switched to a C++ based project, I would not be able to use the .NET framework where other parts of my research are implemented. So the only solution was to implement a wrapper around a C++ library and use P/Invoke in order to expose the required methods in C# code.

The first idea was to use the LAPACK/BLAS numerical libraries to calculate not only the SVD but a whole set of linear algebra routines. LAPACK/BLAS have a long history, going back to the 1970s. They have proved to be very fast and reliable. However, they do not support the GPU.

Then I came across MAGMA, which is essentially LAPACK for the GPU. MAGMA is a very complex and fast library, but it requires CUDA. If the machine has no CUDA, the library cannot be used.

Then I decided to use a hybrid approach: use MAGMA whenever the machine has CUDA, and otherwise fall back to LAPACK as the computation engine. This approach is the most complex and required advanced skills in C++ and C#. After more than a month of implementation, MagmaSharp was published as a GitHub open source project, with the first public release, MagmaSharp 0.02.01, at Nuget.org.

MagmaSharp v0.02.01

The first release of MagmaSharp supports the following MAGMA driver routines for general rectangular matrices:

1. gesv – solves the linear system $AX = B$, where $A$ is a general non-symmetric matrix,
2. gels – least squares solve of $AX = B$, where $A$ is rectangular,
3. geev – eigenvalue solver for a non-symmetric matrix, $AX = X \lambda$,
4. gesvd – singular value decomposition (SVD), $A = U \Sigma V^T$.

The library supports float and double value types.
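As an illustration of the problem `gesv` solves, here is a tiny self-contained C# sketch that solves a 2×2 system $Ax = b$ by Gaussian elimination (plain .NET, not MagmaSharp; the library routine, which uses LU decomposition with pivoting, should be used for real workloads):

```csharp
using System;

// Solve the 2x2 system A x = b by Gaussian elimination,
// i.e. the kind of problem the gesv driver routine solves.
double[,] A = { { 2, 1 }, { 1, 3 } };
double[]  b = { 5, 10 };

// eliminate x0 from the second row
double factor = A[1, 0] / A[0, 0];
A[1, 1] -= factor * A[0, 1];
b[1]    -= factor * b[0];

// back-substitution
double x1 = b[1] / A[1, 1];
double x0 = (b[0] - A[0, 1] * x1) / A[0, 0];

Console.WriteLine($"x = ({x0}, {x1})"); // x = (1, 3)
```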

Software requirements

The project is built on .NET Core 3.1 and .NET Standard 2.1. It is built and tested on Windows 10 1909 only.

Software (Native Libraries) requirements

In order to compile, build and use the library, the following native libraries need to be installed.

However, if you install MagmaSharp as a NuGet package, both libraries are included, so you don’t have to install them.

How to use MagmaSharp

MagmaSharp is packaged as a NuGet package and can be added to your .NET project like any ordinary .NET component. You don’t have to worry about native libraries and dependencies; everything is included in the package. The package can be installed from this link, or just search for MagmaSharp.

How to Build MagmaSharp from the source

1. Download the MagmaSharp source code from the GitHub page.

2. Reference the MAGMA static library and put it in the MagmaLib folder. The MAGMA static library can be downloaded and built from the official site.

3. Open ‘MagmaSharp.sln’ with Visual Studio 2019.

4. Make sure the building architecture is x64.

5. Restore Nuget packages.

6. Build and run the Solution.

How to start with MagmaSharp

The best way to start with MagmaSharp is to take a look at the MagmaSharp.XUnit project, where there is a small example showing how to use each of the implemented methods with or without a CUDA device.

Predictive Maintenance on .NET Platform

Summary

This notebook is completely implemented on the .NET platform using:

• C# Jupyter Notebook – Jupyter Notebook experience with C# and .NET,
• ML.NET – Microsoft open source framework for machine learning, and
• Daany – DAta ANalYtics open source library for data analytics. It can be installed as a NuGet package.

There are small differences between this notebook and the notebooks at the official Azure gallery portal, but in most cases the code follows the steps defined there. The purpose of this notebook is to demonstrate how to use the .NET Jupyter Notebook with Daany.DataFrame and ML.NET in order to prepare the data and build a Predictive Maintenance model on the .NET platform. But first, let's see what Predictive Maintenance is and why it is important.

Quick Introduction to Predictive Maintenance

Simply speaking, it is a technique for determining (predicting) the failure of a machine component in the near future, so that the component can be replaced, based on the maintenance plan, before it fails and stops the production process. Predictive maintenance can improve the production process and increase productivity. By successfully applying predictive maintenance we are able to achieve the following goals:

• reduce the operational risk of mission-critical equipment

• control cost of maintenance by enabling just-in-time maintenance operations

• discover patterns connected to various maintenance problems

• provide Key Performance Indicators.

The following image shows different types of maintenance in production.

Predictive maintenance data collection

In order to apply this technique we need various data from production, including but not limited to:

• telemetry data from the observed machines (vibration, voltage, temperature, etc.),
• errors and logs data relevant to each machine,
• failure data, e.g. when a certain component is replaced,
• quality and accuracy data, machine properties, models, age, etc.

3 Steps in Predictive Maintenance

Usually, every Predictive Maintenance technique proceeds through the following 3 main steps:

1. Collect Data – collect all possible descriptions, historical and real-time data, usually by using IoT devices, various loggers, technical documentation, etc.

2. Predict Failures – the collected data can be transformed into machine-learning-ready data sets, which are used to build a machine learning model to predict the failures of components in the set of machines in production.

3. React – using the information about which components will fail in the near future, we can activate the replacement process so the component is replaced before it fails and the production process is not interrupted.

Predict Failures

In this article, the second step is presented, focusing on data preparation. In order to predict failures in the production process, a set of data transformation, cleaning, feature engineering, and selection steps must be performed to prepare the data for building a machine learning model. Data preparation plays a crucial role in model building, since its quality directly reflects on model accuracy and reliability.

Software requirements

In this article, the complete procedure in data preparation is presented. The whole process is performed using:

• .NET Core 3.1 – the latest .NET platform version,

• .NET Jupyter Notebook – .NET implementation of the popular Jupyter Notebook,

• ML.NET – Microsoft open-source framework for Machine Learning on .NET Platform and

• Daany – DAta ANalYtics library. It can be found on GitHub and also as a NuGet package.

Notebook preparation

In order to complete this task, we should install several NuGet packages and add several using statements. The following code block shows the using statements, along with additional code related to the notebook output format.

Note: the NuGet package installation must be in the first cell of the notebook, otherwise the notebook will not work as expected. Hopefully this will be changed once the final version is released.

//using Microsoft.ML.Data;
using XPlot.Plotly;
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
//using statements for the Daany packages
using Daany;
using Daany.MathStuff;
using Daany.Ext;
//
using Microsoft.ML;

//DataFrame formatter
using Microsoft.AspNetCore.Html;
Formatter<DataFrame>.Register((df, writer) =>
{
    //render at most the first 20 rows
    var rows = new List<List<IHtmlContent>>();
    var take = 20;
    //collect the cells of each row
    for (var i = 0; i < Math.Min(take, df.RowCount()); i++)
    {
        var cells = new List<IHtmlContent>();
        foreach (var obj in df[i])
        {
            cells.Add(td(obj));
        }
        rows.Add(cells);
    }
    var t = table(
        tbody(rows.Select(r => tr(r))));

    writer.Write(t);
}, "text/html");


In order to start with data preparation, we need data. The data can be found at Azure blob storage, and is maintained by the Azure Gallery article.

Once the data has been downloaded from the blob storage, it will not be downloaded again; local copies will be used instead.

The Data

The data we are using for predictive maintenance can be classified into:

• telemetry – which collects historical data about machine behavior (voltage, vibration, etc)
• errors – the data about warnings and errors in the machines
• maint – data about replacement and maintenance for the machines,
• machines – descriptive information about the machines,
• failures – data when a certain machine is stopped, due to component failure.

We load all the files in order to fully prepare the data for the training process. The following code sample loads the data into application memory.

%%time
//Load ALL 5 data frame files
//DataFrame Cols: datetime,machineID,volt,rotate,pressure,vibration
var telemetry = DataFrame.FromCsv("data/PdM_telemetry.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var errors = DataFrame.FromCsv("data/PdM_errors.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var maint = DataFrame.FromCsv("data/PdM_maint.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var failures = DataFrame.FromCsv("data/PdM_failures.csv", dformat: "yyyy-mm-dd hh:mm:ss");
var machines = DataFrame.FromCsv("data/PdM_machines.csv", dformat: "yyyy-mm-dd hh:mm:ss");


Telemetry

The first data source is the telemetry data about the machines. It consists of voltage, rotation, pressure, and vibration measurements, measured in real time, hourly, from 100 machines. The data was collected during the year 2015. The following data shows the first 10 records in the dataset.

A description of the whole dataset is shown in the next cell. As can be seen, we have nearly a million records for the machines, which is a good starting point for the analysis.

In case we want to visualize the telemetry data, we can select one of the columns and show it.

Errors

Some of the most important information in every Predictive Maintenance system is the error data. Errors are non-breaking events recorded while the machine is still operational. The error dates and times are rounded to the closest hour, since the telemetry data is collected at an hourly rate.
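Rounding a timestamp to the closest hour can be sketched with a small helper (illustrative only; `RoundToClosestHour` is not part of Daany or the dataset pipeline):

```csharp
using System;

// Round a timestamp to the closest hour, as was done for the error
// timestamps so they align with the hourly telemetry data.
static DateTime RoundToClosestHour(DateTime dt)
{
    // shift by half an hour, then truncate to the hour boundary
    var shifted = dt.AddMinutes(30);
    return new DateTime(shifted.Year, shifted.Month, shifted.Day, shifted.Hour, 0, 0);
}

Console.WriteLine(RoundToClosestHour(new DateTime(2015, 1, 3, 7, 40, 0))); // rounds up to 08:00
Console.WriteLine(RoundToClosestHour(new DateTime(2015, 1, 3, 7, 20, 0))); // rounds down to 07:00
```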

errors.Head()


//count number of errors
var barValue = errors["errorID"].GroupBy(v => v)
.OrderBy(group => group.Key)
.Select(group => Tuple.Create(group.Key, group.Count()));

//Plot Errors data
var chart = Chart.Plot(
new Graph.Bar()
{
x = barValue.Select(x=>x.Item1),
y = barValue.Select(x=>x.Item2),
//  mode = "markers",
}

);
var layout = new XPlot.Plotly.Layout.Layout()
{ title = "Error distribution",
xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Error name" },
yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Error Count" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)


Maintenance

Maintenance is the next PdM component; it tells us about scheduled and unscheduled maintenance. It contains records that correspond to both regular inspections of components and failures. A record is added to the maintenance table when a component is replaced, either during a scheduled inspection or due to a breakdown. Records created due to breakdowns are called failures. The maintenance data covers the years 2014 and 2015.

maint.Head()


Machines

The data includes information about the 100 machines that are the subject of the Predictive Maintenance analysis: the model type and the machine age. The distribution of machine age, categorized by model, across the production process is shown in the following image:

//Distribution of models across age
var d1 = machines.Filter("model", "model1", FilterOperator.Equal)["age"]
.GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d2 = machines.Filter("model", "model2", FilterOperator.Equal)["age"]
.GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d3 = machines.Filter("model", "model3", FilterOperator.Equal)["age"]
.GroupBy(g => g).Select(g=>(g.Key,g.Count()));
var d4 = machines.Filter("model", "model4", FilterOperator.Equal)["age"]
.GroupBy(g => g).Select(g=>(g.Key,g.Count()));
//define bars
var b1 = new Graph.Bar(){ x = d1.Select(x=>x.Item1),y = d1.Select(x=>x.Item2),name = "model1"};
var b2 = new Graph.Bar(){ x = d2.Select(x=>x.Item1),y = d2.Select(x=>x.Item2),name = "model2"};
var b3 = new Graph.Bar(){ x = d3.Select(x=>x.Item1),y = d3.Select(x=>x.Item2),name = "model3"};
var b4 = new Graph.Bar(){ x = d4.Select(x=>x.Item1),y = d4.Select(x=>x.Item2),name = "model4"};

//Plot machine data
var chart = Chart.Plot(new[] {b1,b2,b3,b4});
var layout = new XPlot.Plotly.Layout.Layout()
{ title = "Machine Age Distribution", barmode="stack",
xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Machine Age" },
yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Count" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)


Failures

The failures data represents the replacement of components due to machine failures. Once a failure happens, the machine is stopped. This is the crucial difference between errors and failures.

failures.Head()


//count number of failures
var falValues = failures["failure"].GroupBy(v => v)
.OrderBy(group => group.Key)
.Select(group => Tuple.Create(group.Key, group.Count()));

//Plot Failure data
var chart = Chart.Plot(
new Graph.Bar()
{
x = falValues.Select(x=>x.Item1),
y = falValues.Select(x=>x.Item2),
//  mode = "markers",
}

);
var layout = new XPlot.Plotly.Layout.Layout()
{ title = "Failure Distribution across machines",
xaxis=new XPlot.Plotly.Graph.Xaxis() { title="Component Name" },
yaxis = new XPlot.Plotly.Graph.Yaxis() { title = "Number of components replaced" } };
//put layout into chart
chart.WithLayout(layout);

display(chart)


Feature Engineering

This section contains several feature engineering methods used to create features based on the machines’ properties.

Lagged Telemetry Features

First, we are going to create several lagged telemetry features, since the telemetry data is classic time series data.

In the following, the rolling mean and standard deviation of the telemetry data over the last 3-hour lag window are calculated every 3 hours.
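The windowing itself can be illustrated with a plain LINQ sketch (illustrative only, not Daany's API): a mean over a 3-sample window, advanced 3 samples at a time, which is what the `Rolling(3, 3, agg)` calls below do per column.

```csharp
using System;
using System.Linq;

// toy "volt" series sampled hourly
double[] volt = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
int window = 3, step = 3;

// rolling mean over a 3-sample window, computed every 3 samples
var rollingMean = Enumerable.Range(0, (volt.Length - window) / step + 1)
    .Select(i => volt.Skip(i * step).Take(window).Average())
    .ToArray();

Console.WriteLine(string.Join(", ", rollingMean)); // 2, 5, 8
```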

//prepare rolling aggregation for each column for average values
var agg_curent = new Dictionary<string, Aggregation>()
{
{ "datetime", Aggregation.Last }, { "volt", Aggregation.Last }, { "rotate", Aggregation.Last },
{ "pressure", Aggregation.Last },{ "vibration", Aggregation.Last }
};
//prepare rolling aggregation for each column for average values
var agg_mean = new Dictionary<string, Aggregation>()
{
{ "datetime", Aggregation.Last }, { "volt", Aggregation.Avg }, { "rotate", Aggregation.Avg },
{ "pressure", Aggregation.Avg },{ "vibration", Aggregation.Avg }
};
//prepare rolling aggregation for each column for std values
var agg_std = new Dictionary<string, Aggregation>()
{
{ "datetime", Aggregation.Last }, { "volt", Aggregation.Std }, { "rotate", Aggregation.Std },
{ "pressure", Aggregation.Std },{ "vibration", Aggregation.Std }
};

//group Telemetry data by machine ID
var groupedTelemetry = telemetry.GroupBy("machineID");

//calculate rolling mean for grouped data for each 3 hours
var _3AvgValue = groupedTelemetry.Rolling(3, 3, agg_mean)
.Create(("machineID", null), ("datetime", null),("volt", "voltmean_3hrs"), ("rotate", "rotatemean_3hrs"),
("pressure", "pressuremean_3hrs"), ("vibration", "vibrationmean_3hrs"));
//show the head of the newly generated table


//calculate rolling std for grouped data for each 3 hours
var _3StdValue = groupedTelemetry.Rolling(3, 3, agg_std)
.Create(("machineID", null), ("datetime", null),("volt", "voltsd_3hrs"), ("rotate", "rotatesd_3hrs"),
("pressure", "pressuresd_3hrs"), ("vibration", "vibrationsd_3hrs"));
//show the head of the newly generated table


To capture longer-term effects, we calculate the rolling average and standard deviation over a 24-hour lag window as well.

//calculate rolling avg and std for each 24 hours
var _24AvgValue = groupedTelemetry.Rolling(24, 3, agg_mean)
.Create(("machineID", null), ("datetime", null),
("volt", "voltmean_24hrs"), ("rotate", "rotatemean_24hrs"),
("pressure", "pressuremean_24hrs"), ("vibration", "vibrationmean_24hrs"));
var _24StdValue = groupedTelemetry.Rolling(24, 3, agg_std)
.Create(("machineID", null), ("datetime", null),
("volt", "voltsd_24hrs"), ("rotate", "rotatesd_24hrs"),
("pressure", "pressuresd_24hrs"), ("vibration", "vibrationsd_24hrs"));


Merging telemetry features

Once we have rolling lag features calculated, we can merge them into one data frame:

//before merging all the features, create a set of features from the current values for every 3 hours
DataFrame _1CurrentValue = groupedTelemetry.Rolling(3, 3, agg_curent)
.Create(("machineID", null), ("datetime", null),
("volt", null), ("rotate", null), ("pressure", null), ("vibration", null));


Now that we have the basic data frame, we merge the previously calculated data frames with it.

//merge all telemetry data frames into one
var mergeCols= new string[] { "machineID", "datetime" };
var df1 = _1CurrentValue.Merge(_3AvgValue, mergeCols, mergeCols, JoinType.Left, suffix: "df1");

var df2 = df1.Merge(_24AvgValue, mergeCols, mergeCols, JoinType.Left, suffix: "df2");

var df3 = df2.Merge(_3StdValue, mergeCols, mergeCols, JoinType.Left, suffix: "df3");

var df4 = df3.Merge(_24StdValue, mergeCols, mergeCols, JoinType.Left, suffix: "df4");


At the end of the merging process, select relevant columns.

//select final dataset for the telemetry
var telDF = df4["machineID","datetime","volt","rotate", "pressure", "vibration",
"voltmean_3hrs","rotatemean_3hrs","pressuremean_3hrs","vibrationmean_3hrs",
"voltmean_24hrs","rotatemean_24hrs","pressuremean_24hrs","vibrationmean_24hrs",
"voltsd_3hrs", "rotatesd_3hrs","pressuresd_3hrs","vibrationsd_3hrs",
"voltsd_24hrs", "rotatesd_24hrs","pressuresd_24hrs","vibrationsd_24hrs"];

//remove NANs
var telemetry_final = telDF.DropNA();


Now the top 5 rows of the final telemetry data look like the following image:

telemetry_final.Head()


Lag Features from Errors

Unlike telemetry, which has numerical values, errors have categorical values denoting the type of error that occurred at a timestamp. We are going to aggregate the number of errors of each type that occurred in the lag window.

First, encode the errors with One-Hot-Encoding:

var mlContext = new MLContext(seed:2019);
//One Hot Encoding of error column
var encodedErr = errors.EncodeColumn(mlContext, "errorID");

//sum duplicated errors by machine and date
var errors_aggs = new Dictionary<string, Aggregation>()
{
    { "error1", Aggregation.Sum }, { "error2", Aggregation.Sum },
    { "error3", Aggregation.Sum }, { "error4", Aggregation.Sum },
    { "error5", Aggregation.Sum }
};

//group and sum duplicated errors
encodedErr =  encodedErr.GroupBy(new string[] { "machineID", "datetime" }).Aggregate(errors_aggs);

//
encodedErr = encodedErr.Create(("machineID", null), ("datetime", null),
("error1", "error1sum"), ("error2", "error2sum"),
("error3", "error3sum"), ("error4", "error4sum"), ("error5", "error5sum"));


// align errors with telemetry datetime values so that we can calculate aggregations
var er = telemetry.Merge(encodedErr,mergeCols, mergeCols, JoinType.Left, suffix: "error");
//
er = er["machineID","datetime", "error1sum", "error2sum", "error3sum", "error4sum", "error5sum"];
//fill missing values with 0
er.FillNA(0);


//count the number of errors of different types in the last 24 hours, for every 3 hours
//define aggregation
var errors_aggs1 = new Dictionary<string, Aggregation>()
{
{ "datetime", Aggregation.Last },{ "error1sum", Aggregation.Sum }, { "error2sum", Aggregation.Sum },
{ "error3sum", Aggregation.Sum },{ "error4sum", Aggregation.Sum },
{ "error5sum", Aggregation.Sum }
};

//count the number of errors of different types in the last 24 hours,  for every 3 hours
var eDF = er.GroupBy(new string[] { "machineID"}).Rolling(24, 3, errors_aggs1);

//
var newdf=  eDF.DropNA();

var errors_final = newdf.Create(("machineID", null), ("datetime", null),
("error1sum", "error1count"), ("error2sum", "error2count"),
("error3sum", "error3count"), ("error4sum", "error4count"), ("error5sum", "error5count"));


The Time Since Last Replacement

The main task here is to create relevant features in order to build a quality data set for the machine learning part. One good feature would be the number of replacements of each component in the last 3 months, to incorporate the frequency of replacements.

Furthermore, we can calculate how long it has been since a component was last replaced; this is expected to correlate better with component failures, since the longer a component is used, the more degradation should be expected. First, we encode the maintenance table:

//One Hot Encoding of the comp column
var encMaint = maint.EncodeColumn(mlContext, "comp");


//create separate data frames in order to calculate proper time since last replacement
DataFrame dfComp1 = encMaint.Filter("comp1", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp2 = encMaint.Filter("comp2", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp3 = encMaint.Filter("comp3", 1, FilterOperator.Equal)["machineID", "datetime"];
DataFrame dfComp4 = encMaint.Filter("comp4", 1, FilterOperator.Equal)["machineID", "datetime"];



//from the telemetry data create a helper data frame so we can calculate additional columns from the maintenance data frame
var compData = telemetry_final.Create(("machineID", null), ("datetime", null));

%%time
//calculate a new set of columns holding the time since the last replacement of each component
var newCols= new string[]{"sincelastcomp1","sincelastcomp2","sincelastcomp3","sincelastcomp4"};
var calcValues= new object[4];

//perform calculation for each row of the helper data frame
compData.AddCalculatedColumns(newCols, (IDictionary<string, object> row, int i) =>
{
var machineId = Convert.ToInt32(row["machineID"]);
var date = Convert.ToDateTime(row["datetime"]);

var maxDate1 = dfComp1.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
.Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
var maxDate2 = dfComp2.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
.Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
var maxDate3 = dfComp3.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
.Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();
var maxDate4 = dfComp4.Filter("machineID", machineId, FilterOperator.Equal)["datetime"]
.Where(x => (DateTime)x <= date).Select(x=>(DateTime)x).Max();

//perform calculation
calcValues[0] = (date - maxDate1).TotalDays;
calcValues[1] = (date - maxDate2).TotalDays;
calcValues[2] = (date - maxDate3).TotalDays;
calcValues[3] = (date - maxDate4).TotalDays;
return calcValues;
});

Wall time: 178708.9764ms


var maintenance_final = compData;


Machine Features

The machines data set contains descriptive information about the machines, such as the type of machine and its age, i.e., the years in service.

machines.Head()


Joining features into final ML ready data set

As the last step in feature engineering, we merge all the features into one data set.

var merge2Cols=new string[]{"machineID"};
var fdf1= telemetry_final.Merge(errors_final, mergeCols, mergeCols,JoinType.Left, suffix: "er");
var fdf2 = fdf1.Merge(maintenance_final, mergeCols,mergeCols,JoinType.Left, suffix: "mn");
var features_final = fdf2.Merge(machines, merge2Cols,merge2Cols,JoinType.Left, suffix: "ma");

features_final= features_final["datetime", "machineID",
"voltmean_3hrs", "rotatemean_3hrs", "pressuremean_3hrs", "vibrationmean_3hrs",
"voltsd_3hrs", "rotatesd_3hrs", "pressuresd_3hrs", "vibrationsd_3hrs",
"voltmean_24hrs", "rotatemean_24hrs", "pressuremean_24hrs", "vibrationmean_24hrs",
"voltsd_24hrs","rotatesd_24hrs", "pressuresd_24hrs", "vibrationsd_24hrs",
"error1count", "error2count", "error3count", "error4count", "error5count",
"sincelastcomp1", "sincelastcomp2", "sincelastcomp3", "sincelastcomp4",
"model", "age"];
//

DataFrame.ToCsv("data/final_features.csv", features_final);


Define Label Column

The label in predictive maintenance should indicate whether a machine will fail in the near future due to the failure of a certain component. If we take 24 hours as the horizon for this problem, label construction consists of a new column in the feature data set which indicates whether a certain machine will fail in the next 24 hours due to the failure of one of several components.

In this way we define the label as a categorical variable containing:

• none – if the machine will not fail in the next 24 hours,
• comp1 to comp4 – if the machine will fail in the next 24 hours due to the failure of the corresponding component.

Since we can experiment with the label construction by applying different conditions, we can implement methods that take several arguments in order to define the general problem.
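The 24-hour labeling rule can be sketched in isolation (illustrative only; `Label` is a hypothetical helper, not part of the notebook's data-frame code below):

```csharp
using System;

// A record is labeled with the failing component name if a failure of the
// same machine occurs within the next 24 hours; otherwise it is "none".
static string Label(DateTime recordTime, DateTime failureTime, string failedComp)
{
    var hours = (failureTime - recordTime).TotalHours;
    return (hours >= 0 && hours <= 24) ? failedComp : "none";
}

Console.WriteLine(Label(new DateTime(2015, 1, 5, 6, 0, 0),
                        new DateTime(2015, 1, 5, 20, 0, 0), "comp2")); // comp2 (14 hours ahead)
Console.WriteLine(Label(new DateTime(2015, 1, 3, 6, 0, 0),
                        new DateTime(2015, 1, 5, 20, 0, 0), "comp2")); // none (62 hours ahead)
```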

failures.Describe(false)


//construct the label column which indicates if the current machine will
//fail in the next predTime (24 hours by default) due to the failure of a certain component
//create final data frame from feature df
var finalDf = new DataFrame(features_final);

//group failures by machineID and datetime
string[] cols = new string[] {  "machineID" , "datetime"};
var failDfgrp = failures.GroupBy(cols);

var rV = new object[] { "none" };
finalDf.AddCalculatedColumns(new string[]{"failure"}, (object[] row, int i) => rV);

//create a new data frame from featuresDF by grouping machineID and datetime
var featureDfGrouped = finalDf["datetime","machineID", "failure"].GroupBy(cols);

//now look at every failure and calculate if the machine will fail in the next 24 hours
//in case two or more components failed for the same machine, add a new row in the df
var failureDfExt = featureDfGrouped.Transform((xdf) =>
{
//extract the row from featureDfGrouped
var xdfRow = xdf[0].ToList();
var refDate = (DateTime)xdfRow[0];
var machineID = (int)xdfRow[1];

//now look if the failure contains the machineID
if(failDfgrp.Group2.ContainsKey(machineID))
{
//get the date and calculate total hours
var dff = failDfgrp.Group2[machineID];

foreach (var dfff in dff)
{
for (int i = 0; i < dfff.Value.RowCount(); i++)
{
//"datetime","machineID","failure"
var frow = dfff.Value[i].ToList();
var dft = (DateTime)frow[0];

//if total hours is less or equal than 24 hours set component to the failure column
var totHours = (dft - refDate).TotalHours;
if (totHours <= 24 && totHours >=0)
{
if (xdf.RowCount() > i)
xdf["failure", i] = frow[2];
else//in case two components failed for the same machine at
//the same time, add a new row with the new component name
{
var r = xdf[0].ToList();
r[2] = frow[2];
xdf.AddRow(r);
}
}
}
}
}
return xdf;
});

//Now merge extended failure Df with featureDF
var final_dataframe = finalDf.Merge(failureDfExt, cols, cols,JoinType.Left, "fail");

//define final set of columns
final_dataframe = final_dataframe["datetime", "machineID",
"voltmean_3hrs", "rotatemean_3hrs", "pressuremean_3hrs", "vibrationmean_3hrs",
"voltsd_3hrs", "rotatesd_3hrs", "pressuresd_3hrs", "vibrationsd_3hrs",
"voltmean_24hrs", "rotatemean_24hrs", "pressuremean_24hrs", "vibrationmean_24hrs",
"voltsd_24hrs", "rotatesd_24hrs", "pressuresd_24hrs", "vibrationsd_24hrs",
"error1count", "error2count", "error3count", "error4count", "error5count",
"sincelastcomp1", "sincelastcomp2", "sincelastcomp3", "sincelastcomp4",
"model", "age", "failure_fail"];

//rename column
final_dataframe.Rename(("failure_fail", "failure"));

//save the file data frame to disk
DataFrame.ToCsv("data/final_dataFrame.csv",final_dataframe);


Final Data Frame

Let's see what the final_dataframe looks like. It contains 30 columns, most of which are numerical. The model column is categorical, and it should be encoded once we get to the machine learning part.

Also, the label column failure is a categorical column containing 5 different categories: none, comp1, comp2, comp3 and comp4. We can also see that the data set is not balanced: we have 2785705 none rows and only 5923 rows in the other categories combined. This is a typical imbalanced dataset, and we should be careful when evaluating models, because a model which always returns none will have more than 97% accuracy.
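A quick sanity check of that baseline: the accuracy of a model that always predicts none, computed from the counts quoted above.

```csharp
using System;

// majority-class ("always none") baseline accuracy from the row counts above
long none = 2_785_705, others = 5_923;
double baselineAccuracy = (double)none / (none + others);

Console.WriteLine(baselineAccuracy); // ≈ 0.9979, i.e. well above 97%
```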

final_dataframe.Describe(false)


In the next part, we are going to implement the training and evaluation process of the Predictive Maintenance model. The full notebook for this blog post can be found here.

In depth LSTM Implementation using CNTK on .NET platform

In this blog post the implementation of the LSTM recurrent neural network in CNTK will be shown in detail. The implementation covers the LSTM described in the Hochreiter & Schmidhuber (1997) paper, which can be found here. A great blog post about LSTM can also be found at colah’s blog, which explains in detail the structure of the LSTM cell as well as some of the most used LSTM variants. In this blog post the LSTM recurrent network will be implemented using CNTK, a deep learning tool, with the C# programming language and the .NET Core platform. Also, in case you want to see how to implement an LSTM in pure C# without any additional library, you can read the great MSDN article: Test Run – Understanding LSTM Cells Using C# by James McCaffrey.

The whole implementation of the LSTM RNN is part of ANNdotNET – a deep learning tool on the .NET platform. More information about the project can be found at the GitHub project page: github.com/bhrnjica/anndotnet.

Introduction to LSTM Recurrent Network

Classic neural networks are built on the assumption that the data has no order when entering the network, and that the output depends only on the input features. When the output depends on the features and on previous outputs, a classic feed-forward neural network cannot help. The solution to such a problem may be a neural network that is recursively provided with its previous outputs. This kind of network is called a recurrent neural network (RNN); it was introduced by Hopfield in the 1980s, and later popularized when the back-propagation algorithm was improved in the early 1990s. A simple concept of the recurrent neural network is shown in the following image.

The current output of the recurrent network is defined by the current input $x_t$, and also by states related to the previous network outputs $h_{t-1}$, $h_{t-2}$, etc.
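This recurrence can be written compactly (a standard formulation, with $W$, $U$ and $b$ denoting the network weights and bias, which are not named explicitly in the text):

$h_t = \sigma\left(W x_t + U h_{t-1} + b\right)$

where $\sigma$ is the activation function and $h_t$ is both the current output (state) and an input to the next time step.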

The concept of the recurrent neural network is simple and easy to implement, but problems arise during the training phase due to unpredictable gradient behavior. During the training phase, the gradient problems of a neural network can be summarized in two categories: the vanishing and the exploding gradient.

The recurrent neural network is trained with a back-propagation algorithm specially developed for recurrent ANNs, called back-propagation through time (BPTT). In the vanishing gradient problem, parameter updates are proportional to the gradient of the error, which is in most cases negligibly small; as a result, the corresponding weights remain constant and stop the network from further training.

On the other hand, the exploding gradient problem refers to the opposite behavior, where the updates of the weights (the gradient of the cost function) become large in each back-propagation step. This problem is caused by the explosion of the long-term components in the recurrent neural network.
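The two behaviors can be illustrated with a toy Python calculation (the factors 0.9 and 1.1 are purely illustrative): back-propagating through many time steps repeatedly multiplies the gradient by a similar factor, so anything consistently below 1 shrinks toward zero and anything above 1 blows up.

```python
def gradient_after(steps, factor):
    """Toy model of BPTT: the gradient is multiplied by the same factor once per time step."""
    grad = 1.0
    for _ in range(steps):
        grad *= factor
    return grad

vanishing = gradient_after(50, 0.9)   # shrinks below 0.01 - weights barely move
exploding = gradient_after(50, 1.1)   # grows above 100 - training becomes unstable
```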

The solution to the above problems is a specific design of the recurrent network called Long Short-Term Memory, LSTM. One of the main advantages of the LSTM is that it can provide a constant error flow. In order to provide a constant error flow, the LSTM cell contains a set of memory blocks, which have the ability to store the temporal state of the network. The LSTM also has special multiplicative units called gates that control the information flow.

The LSTM cell consists of:

• input gate – controls the flow of the input activations into the memory cell,
• output gate – controls the output flow of the cell activation,
• forget gate – filters the information from the input and the previous output, and decides which of it should be remembered or forgotten and dropped out.

Besides the three gates, the LSTM cell contains the cell update, which is usually a tanh layer that becomes part of the cell state.

In each LSTM cell, three variables come into the cell:

• the current input xt,
• previous output ht-1 and
• previous cell state ct-1.

On the other hand, two variables come out of each LSTM cell:

• the current output ht and
• the current cell state ct.

Graphical representation of the LSTM cell is shown in the following image.

In order to implement the LSTM recurrent network, the LSTM cell should be implemented first. The LSTM cell has three gates and two internal states, which should be determined in order to calculate the current output and the current cell state.

The LSTM cell can be defined as a neural network where the input vector $x=\left(x_1,x_2,x_3,\ldots x_t\right)$ in time $t$ maps to the output vector $y=\left(y_1,\ y_2,\ \ldots,y_m\right)$, through the calculation of the following layers:

• the forget gate sigmoid layer for the time t, ft, is calculated from the previous output ht-1, the input vector xt, and the weight matrix of the forget layer Wf, with the addition of the corresponding bias bf:

$f_t=\sigma\left(W_f\bullet\left[h_{t-1},x_t\right]+b_f\right).$

• the input gate sigmoid layer for the time t, it, is calculated from the previous output ht-1, the input vector xt, and the weight matrix of the input layer Wi, with the addition of the corresponding bias bi:

$i_t=\sigma\left(W_i\bullet\left[h_{t-1},x_t\right]+b_i\right).$

• the cell state in time t, Ct, is calculated from the forget gate ft and the previous cell state Ct-1. The result is summed with the product of the input gate it and the cell update state ${\widetilde{c}}_t$, which is a tanh layer calculated from the previous output ht-1, the input vector xt, and the weight matrix for the cell, with the addition of the corresponding bias bC:

$C_t=f_t\ \otimes C_{t-1}+\ i_t\otimes\tanh{\left(\ W_C\bullet\left[h_{t-1},x_t\right]+b_C\right).}$

• the output gate sigmoid layer for the time t, ot, is calculated from the previous output ht-1, the input vector xt, and the weight matrix of the output layer Wo, with the addition of the corresponding bias bo:

$o_t=\sigma\left(W_o\bullet\left[h_{t-1},x_t\right]+b_o\right).$

The final stage of the LSTM cell is the calculation of the current output ht. The current output is calculated with the element-wise multiplication $\otimes$ of the output gate layer and the tanh layer of the current cell state Ct:

$h_t=o_t\otimes\tanh{\left(C_t\right)}.$

The current output ht is passed through the network as the previous state for the next LSTM cell, or as the input for the neural network output layer.
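The five equations above can be collected into a single forward step. The following NumPy sketch is only an illustration of the math (names and shapes are my own, not part of the CNTK implementation in this post); each weight matrix acts on the concatenation [h_{t-1}, x_t]:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM step; every W_* has shape (m, m + n), every b_* has shape (m,)."""
    z = np.concatenate([h_prev, x_t])     # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)          # forget gate
    i_t = sigmoid(W_i @ z + b_i)          # input gate
    c_tilde = np.tanh(W_c @ z + b_c)      # cell update
    c_t = f_t * c_prev + i_t * c_tilde    # current cell state
    o_t = sigmoid(W_o @ z + b_o)          # output gate
    h_t = o_t * np.tanh(c_t)              # current output
    return h_t, c_t
```

With zero weights and a zero previous cell state every gate evaluates to 0.5 and both outputs stay at zero, which is an easy sanity check.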

LSTM with Peephole connection

One of the LSTM variants, which is implemented in the Python-based CNTK, is the LSTM with peephole connections, first introduced by Gers & Schmidhuber (2000). A peephole connection lets each gate (forget, input and output) look at the cell state.

Now the gates with peephole connections can be expressed so that the starting term of each gate is extended with an additional term for Ct-1. So, the forget gate with peephole can be expressed as:

$f_t=\sigma\left(W_f\bullet\left[{C_{t-1},\ h}_{t-1},x_t\right]+b_f\right).$

Similarly, the input gate and the output gate with peephole connection are expressed as:

$i_t=\sigma\left(W_i\bullet\left[C_{t-1},\ h_{t-1},x_t\right]+b_i\right)$,

$o_t=\sigma\left(W_o\bullet\left[{C_{t-1},h}_{t-1},x_t\right]+b_o\right)$.

With peephole connections the LSTM cell gets an additional parameter vector for each gate, and the number of LSTM parameters is increased by an additional 3×m parameters, where m is the output dimension.
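A peephole gate can be sketched the same way. In this sketch (hypothetical names, not the library API) the cell state enters element-wise through one extra parameter vector of length m per gate, which is where the additional 3×m parameters come from:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_gate(W, z, p, c_prev, b):
    """Gate with peephole: W acts on [h_{t-1}, x_t]; p is the per-gate peephole vector."""
    return sigmoid(W @ z + p * c_prev + b)

m, n = 3, 2
rng = np.random.default_rng(1)
z = rng.normal(size=m + n)        # concatenated [h_{t-1}, x_t]
c_prev = rng.normal(size=m)
W = rng.normal(size=(m, m + n))
p, b = rng.normal(size=m), np.zeros(m)

f_t = peephole_gate(W, z, p, c_prev, b)   # shape (m,), values strictly in (0, 1)
extra_params = 3 * m                      # peephole vectors for forget, input and output gates
```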

Implementation of LSTM Recurrent network

CNTK is Microsoft's open source library for deep learning written in C++, but it can be used from various programming languages: Python, C#, R and Java. In order to use the library in C#, the related CNTK NuGet package has to be installed, and the project must be built for the 64-bit architecture.

1. Open Visual Studio 2017 and create a simple .NET Core console application.
2. Then install the CNTK GPU NuGet package into the newly created console application.

Once the startup project is created, the LSTM CNTK implementation can start.

Implementation of the LSTM Cell

As stated previously, the implementation presented in this blog post originates from ANNdotNET – an open source project for deep learning on the .NET platform. It can be found at the official GitHub project page.

The LSTM recurrent network starts with the implementation of the LSTMCell class. The LSTMCell class is derived from the NetworkFoundation class, which implements basic neural network operations through the following methods:

• Bias – bias parameters implementation
• Weights – implementation of the weights parameters
• Layer – implementation of the classic fully connected linear layer
• AFunction – applying activation function on the layer.

The NetworkFoundation class is shown in the next code snippet:

///////////////////////////////////////////////////////////////////////////////////////////
// ANNdotNET - Deep Learning Tool on .NET Platform
// Copyright 2017-2018 Bahrudin Hrnjica
// This code is free software under the MIT License
//
// Bahrudin Hrnjica
// bhrnjica@hotmail.com
// Bihac, Bosnia and Herzegovina
// http://bhrnjica.net
//////////////////////////////////////////////////////////////////////////////////////////
using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network
{
public class NetworkFoundation
{

public Variable Layer(Variable x, int outDim, DataType dataType, DeviceDescriptor device, uint seed = 1 , string name="")
{
var b = Bias(outDim, dataType, device);
var W = Weights(outDim, dataType, device, seed, name);

var Wx = CNTKLib.Times(W, x, name+"_wx");
var l = CNTKLib.Plus(b,Wx, name);

return l;
}

public Parameter Bias(int nDimension, DataType dataType, DeviceDescriptor device)
{
//initial value
var initValue = 0.01;
NDShape shape = new int[] { nDimension };
var b = new Parameter(shape, dataType, initValue, device, "_b");
//
return b;
}

public Parameter Weights(int nDimension, DataType dataType, DeviceDescriptor device, uint seed = 1, string name = "")
{
//initializer of parameter
var glorotI = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);
//create shape the dimension is partially known
NDShape shape = new int[] { nDimension, NDShape.InferredDimension };
var w = new Parameter(shape, dataType, glorotI, device, name=="" ? "_w" : name);
//
return w;
}

public Function AFunction(Variable x, Activation activation, string outputName="")
{
switch (activation)
{
default:
case Activation.None:
return x;
case Activation.ReLU:
return CNTKLib.ReLU(x, outputName);
case Activation.Softmax:
return CNTKLib.Softmax(x, outputName);
case Activation.Tanh:
return CNTKLib.Tanh(x, outputName);
}
}
}}


As can be seen, the methods implement basic neural building blocks, which can be applied to any network type. Once the NetworkFoundation base class is implemented, the LSTM cell class implementation starts by defining three properties and a custom constructor, as shown in the following code snippet:

using CNTK;
using NNetwork.Core.Common;

namespace NNetwork.Core.Network.Modules
{
public class LSTM : NetworkFoundation
{
public Variable X { get; set; } //LSTM Cell Input
public Function H { get; set; } //LSTM Cell Output
public Function C { get; set; } //LSTM Cell State

public LSTM(Variable input, Variable dh, Variable dc, DataType dataType, Activation actFun, bool usePeephole, bool useStabilizer, uint seed, DeviceDescriptor device)
{
//create cell state
var c = CellState(input, dh, dc, dataType, actFun, usePeephole, useStabilizer, device, ref seed);

//create output from input and cell state
var h = CellOutput(input, dh, c, dataType, device, useStabilizer, usePeephole, actFun, ref seed);

//initialize properties
X = input;
H = h;
C = c;
}


Properties X, H and C hold the current values of the LSTM cell once the LSTM object is created. The LSTM constructor takes several arguments:

• the first three are variables for the input, previous output and previous cell state;
• the activation function of the cell update layer.

The constructor also contains two arguments for creating different LSTM variants: peephole connections and self-stabilization, plus a few other self-explanatory arguments. The LSTM constructor creates the cell state and the output by calling the CellState and CellOutput methods respectively. The implementation of those methods is shown in the next code snippet:

public Function CellState(Variable x, Variable ht_1, Variable ct_1, DataType dataType,
Activation activationFun, bool usePeephole, bool useStabilizer, DeviceDescriptor device, ref uint seed)
{
var ft = AGate(x, ht_1, ct_1, dataType, usePeephole, useStabilizer, device, ref seed, "ForgetGate");
var it = AGate(x, ht_1, ct_1, dataType, usePeephole, useStabilizer, device, ref seed, "InputGate");
var tan = Gate(x, ht_1, ct_1.Shape[0], dataType, device, ref seed);

//apply Tanh (or other) to gate
var tanH = AFunction(tan, activationFun, "TanHCt_1" );

//calculate cell state
var bft = CNTKLib.ElementTimes(ft, ct_1,"ftct_1");
var bit = CNTKLib.ElementTimes(it, tanH, "ittanH");

//cell state
var ct = CNTKLib.Plus(bft, bit, "CellState");
//
return ct;
}

public Function CellOutput(Variable input, Variable ht_1, Variable ct, DataType dataType, DeviceDescriptor device,
bool useStabilizer, bool usePeephole, Activation actFun ,ref uint seed)
{
var ot = AGate(input, ht_1, ct, dataType, usePeephole, useStabilizer, device, ref seed, "OutputGate");

//apply activation function to cell state
var tanHCt = AFunction(ct, actFun, "TanHCt");

//calculate output
var ht = CNTKLib.ElementTimes(ot, tanHCt,"Output");

//create output layer in case different dimensions between cell and output
var c = ct;
Function h = null;
if (ht.Shape[0] != ct.Shape[0])
{
//rectified dimensions by adding linear layer
var so = !useStabilizer? ct : Stabilizer(ct, device);
var wx_b = Weights(ht_1.Shape[0], dataType, device, seed++);
h = wx_b * so;
}
else
h = ht;

return h;
}


The above methods are implemented using the previously defined gates and blocks. The AGate method creates an LSTM gate; it is called twice in CellState, in order to create the forget and input gates. Then the Gate method is called in order to create the linear layer for the cell update. The activation function is provided as a constructor argument. The implementation of the AGate and Gate methods is shown in the following code snippet:

public Variable AGate(Variable x, Variable ht_1, Variable ct_1, DataType dataType, bool usePeephole,
bool useStabilizer, DeviceDescriptor device, ref uint seed, string name)
{
//cell dimension
int cellDim = ct_1.Shape[0];
//define previous output, with stabilization if enabled
var h_prev = !useStabilizer ? ht_1 : Stabilizer(ht_1, device);

//create linear gate
var gate = Gate(x, h_prev, cellDim, dataType, device, ref seed);
if (usePeephole)
{
var c_prev = !useStabilizer ? ct_1 : Stabilizer(ct_1, device);
gate = gate + Peep(c_prev, dataType, device, ref seed);
}
//apply sigmoid to create the gate
var sgate = CNTKLib.Sigmoid(gate, name);
return sgate;
}

private Variable Gate(Variable x, Variable hPrev, int cellDim,
DataType dataType, DeviceDescriptor device, ref uint seed)
{
//create linear layer
var xw_b = Layer(x, cellDim, dataType, device, seed++);
var u = Weights(cellDim, dataType, device, seed++,"_u");
//
var gate = xw_b + (u * hPrev);
return gate;
}


As can be seen, AGate calls the Gate method in order to create the linear layer, and then applies the sigmoid activation function.

In order to create the LSTM variant with peephole connections, as well as the LSTM with self-stabilization, two additional methods are implemented. The peephole connection was explained previously. The implementation of the Stabilizer method is based on the C# examples on the CNTK GitHub page, with minor modifications and re-factoring.
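Before the code, a quick numeric check of the constant it uses: the stabilizer multiplies its input by β = (1/f)·ln(1 + e^(f·p)), and the initial value 0.99537863 is ln(e^f − 1)/f for f = 4, chosen precisely so that β starts at 1. This can be verified with a few lines of Python:

```python
import math

f = 4.0
# initial parameter value: ln(e^f - 1) / f
p0 = math.log(math.exp(f) - 1.0) / f      # ~0.99537863

def beta(p, f=4.0):
    # the stabilizer scale: (1/f) * ln(1 + exp(f * p))
    return math.log(1.0 + math.exp(f * p)) / f
```

Plugging p0 back into beta gives exactly 1, so the stabilizer initially leaves the layer unchanged.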

internal Variable Stabilizer(Variable x, DeviceDescriptor device)
{
//define floating number
var f = Constant.Scalar(4.0f, device);

//make inversion of prev. value
var fInv = Constant.Scalar(f.DataType, 1.0 / 4.0f);

//create value of 1/f*ln (e^f-1)
double initValue = 0.99537863;

//create param with initial value
var param = new Parameter(new NDShape(), f.DataType, initValue, device, "_stabilize");

//make exp of product scalar and parameter
var expValue = CNTKLib.Exp(CNTKLib.ElementTimes(f, param));

//
var cost = Constant.Scalar(f.DataType, 1.0) + expValue;

var log = CNTKLib.Log(cost);

var beta = CNTKLib.ElementTimes(fInv, log);

//multiplication of the variable layer with constant scalar beta
var finalValue = CNTKLib.ElementTimes(beta, x);

return finalValue;
}

internal Function Peep(Variable cstate, DataType dataType, DeviceDescriptor device, ref uint seed)
{
//initial value
var initValue = CNTKLib.GlorotUniformInitializer(1.0, 1, 0, seed);

//create shape which for bias should be 1xn
NDShape shape = new int[] { cstate.Shape[0] };

var bf = new Parameter(shape, dataType, initValue, device, "_peep");

var peep = CNTKLib.ElementTimes(bf, cstate);
return peep;
}


The Peep method is based on the previous description in this blog post; it simply adds an additional set of parameters which brings the previous cell state into the gates.

Implementation of the LSTM Recurrent Network

Once we have the LSTM cell implementation, it is easy to implement a recurrent network based on LSTM. Previously, the LSTM was defined with three input variables: the input and the two previous-state variables. Those previous states should be defined not as real variables but as placeholders, which are changed dynamically on each iteration. So, the recurrent network starts by defining placeholders for the previous output and the previous cell state. Then the LSTM cell object is created. Once the LSTM is created, the past values of the variables are obtained by calling the CNTK method PastValue, and the placeholders are replaced with those past values. At the end the method returns a CNTK Function object which covers one of two cases, controlled by the returnSequence argument:

• first case where the method returns the full sequence,
• second case where the methods return the last element of the sequence.
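Conceptually, replacing the placeholders with PastValue builds the loop below. This Python sketch uses a stand-in step function instead of the real LSTM cell (all names are illustrative), but it shows the role of the returnSequence argument:

```python
import math

def step(x_t, h_prev):
    # stand-in for the LSTM cell: any function of the current input and the previous output
    return math.tanh(0.5 * x_t + 0.25 * h_prev)

def recurrence(xs, return_sequence=False):
    """Unrolls the recurrence: each step receives the output of the previous one."""
    h, outputs = 0.0, []
    for x_t in xs:
        h = step(x_t, h)            # the 'placeholder' h holds the past value here
        outputs.append(h)
    return outputs if return_sequence else outputs[-1]
```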

using CNTK;
using NNetwork.Core.Common;
using NNetwork.Core.Network.Modules;
using System;
using System.Collections.Generic;

namespace NNetwork.Core.Network
{
public class RNN
{
public static Function RecurrenceLSTM(Variable input, int outputDim, int cellDim, DataType dataType, DeviceDescriptor device, bool returnSequence=false,
Activation actFun = Activation.TanH, bool usePeephole = true, bool useStabilizer = true, uint seed = 1)
{
if (outputDim <= 0 || cellDim <= 0)
throw new Exception("Dimension of LSTM cell cannot be zero.");
//prepare output and cell dimensions
NDShape hShape = new int[] { outputDim };
NDShape cShape = new int[] { cellDim };

//create placeholders
//Define previous output and previous cell state as placeholder which will be replace with past values later
var dh = Variable.PlaceholderVariable(hShape, input.DynamicAxes);
var dc = Variable.PlaceholderVariable(cShape, input.DynamicAxes);

//create lstm cell
var lstmCell = new LSTM(input, dh, dc, dataType, actFun, usePeephole, useStabilizer, seed, device);

//get actual values of output and cell state
var actualDh = CNTKLib.PastValue(lstmCell.H);
var actualDc = CNTKLib.PastValue(lstmCell.C);

// Form the recurrence loop by replacing the dh and dc placeholders with the actualDh and actualDc
lstmCell.H.ReplacePlaceholders(new Dictionary<Variable, Variable> { { dh, actualDh }, { dc, actualDc } });

//return value depending of type of LSTM layer
if (returnSequence)
return lstmCell.H;
else
return CNTKLib.SequenceLast(lstmCell.H);

}
}}


As can be seen, the RNN class contains only one static method, which returns the CNTK Function object containing the recurrent network with an LSTM cell. The method takes several arguments: the input variable, the dimension of the output of the recurrent network, the dimension of the LSTM cell, and additional arguments for creating different variants of the LSTM cell.

Implementation of Test Application

Now that the full LSTM-based recurrent network is implemented, we are going to provide a test application that checks basic LSTM functionality. The application contains two test methods in order to check:

• number of LSTM parameters, and
• output and cell states of the LSTM cell for two iterations.

Testing the correct number of the parameters

The first method validates the correct number of LSTM parameters. The LSTM cell has three kinds of parameter sets: U, W and b for each LSTM component: the forget, input and output gates, and the cell update.
Let us assume the number of input dimensions is n, and the number of outputs is m. Also let us assume that the cell dimension is equal to the output dimension. We can define the following matrices:

• U matrix with dimensions of mxn
• W matrix with dimensions of mxm
• B matrix (vector) with dimensions 1xm

In total the LSTM has $P_{\left(LSTM\right)}=4\bullet\left(m^2+m\bullet n+m\right)$ parameters.
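The count can be written down as a small, hypothetical Python helper; for example, m=3 and n=2 give 4·(9+6+3) = 72 parameters:

```python
def lstm_params(m, n, peephole=False):
    """LSTM parameter count for output dimension m and input dimension n:
    four blocks (three gates plus the cell update), each with an m x m matrix (U),
    an m x n matrix (W) and an m-element bias vector (b)."""
    count = 4 * (m * m + m * n + m)
    if peephole:
        count += 3 * m     # one extra m-vector per gate
    return count
```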

In case the LSTM has peephole connections, the number of parameters is increased by an additional C vector with 1xm parameters per gate.

In total the LSTM with peephole connections has $P_{\left(LSTM\right)}=4\bullet\left(m^2+m\bullet n+m\right)+3\bullet1\bullet m$ parameters. The test method is implemented for n=3 and m=4, so the total number of parameters for the default LSTM cell is P = 4(9+6+3) = 4*18 = 72. With the peephole connection the LSTM cell has P = 4(9+6+3) + 3*1*4 = 72 + 12 = 84.

In case the LSTM cell is defined with the self-stabilization parameter, an additional 4xm parameters are defined.

Now that we have defined the number of parameters for the pure LSTM, with peephole connections and with self-stabilization, we can implement the test method based on n=3 and m=4:

[TestMethod]
public void LSTM_Test_Params_Count()
{
//define values, and variables
Variable x = Variable.InputVariable(new int[] { 3 }, DataType.Float, "input");
Variable y = Variable.InputVariable(new int[] { 4 }, DataType.Float, "output");

//Number of LSTM parameters
var lstm1 = RNN.RecurrenceLSTM(x,4,4, DataType.Float,device, Activation.Tanh,true,true,1);

var ft = lstm1.Inputs.Where(l => l.Uid.StartsWith("Parameter")).ToList();
var consts = lstm1.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
var inp = lstm1.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

//bias params
var bs = ft.Where(p => p.Name.Contains("_b")).ToList();
var totalBs = bs.Sum(v => v.Shape.TotalSize);
Assert.AreEqual(totalBs, 12);
//weights
var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
var totalWs = ws.Sum(v => v.Shape.TotalSize);
Assert.AreEqual(totalWs, 24);
//update
var us = ft.Where(p => p.Name.Contains("_u")).ToList();
var totalUs = us.Sum(v => v.Shape.TotalSize);
Assert.AreEqual(totalUs, 36);
//stabilizer and peephole params, collected by the names used in Stabilizer and Peep
var totalst = ft.Where(p => p.Name.Contains("_stabilize")).Sum(v => v.Shape.TotalSize);
var totalPh = ft.Where(p => p.Name.Contains("_peep")).Sum(v => v.Shape.TotalSize);

var totalOnly = totalBs + totalWs + totalUs;
var totalWithStabilize = totalOnly + totalst;
var totalWithPeep = totalOnly + totalPh;

var totalP = totalOnly + totalst + totalPh;
var totalParams = ft.Sum(v => v.Shape.TotalSize);
Assert.AreEqual(totalP, totalParams);
}


Testing the output and cell state values

In this test, the network parameters, the input, the previous output and the previous cell state are set up. The result of this test shows whether the LSTM cell returns correct output and cell state values for the first and second iteration. The implementation of this test is shown in the following code snippet:

[TestMethod]
public void LSTM_Test_WeightsValues()
{

//define values, and variables
Variable x = Variable.InputVariable(new int[] { 2 }, DataType.Float, "input");
Variable y = Variable.InputVariable(new int[] { 3 }, DataType.Float, "output");

//data 01
var x1Values = Value.CreateBatch<float>(new NDShape(1, 2), new float[] { 1f, 2f }, device);
var ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0f, 0f, 0f }, device);
var ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0f, 0f, 0f }, device);

var y1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0.0629f, 0.0878f, 0.1143f }, device);

//data 02
var x2Values = Value.CreateBatch<float>(new NDShape(1, 2), new float[] { 3f, 4f }, device);
var y2Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { 0.1282f, 0.2066f, 0.2883f }, device);

//Create LSTM Cell with predefined previous output and prev cell state
Variable ht_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevOutput");
Variable ct_1 = Variable.InputVariable(new int[] { 3 }, DataType.Float, "prevCellState");
var lstmCell = new LSTM(x, ht_1, ct_1, DataType.Float, Activation.Tanh, false, false, 1, device);

var ft = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Parameter")).ToList();
var pCount = ft.Sum(p => p.Shape.TotalSize);
var consts = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Constant")).ToList();
var inp = lstmCell.H.Inputs.Where(l => l.Uid.StartsWith("Input")).ToList();

//bias params
var bs = ft.Where(p => p.Name.Contains("_b")).ToList();
var pa = new Parameter(bs[0]);
pa.SetValue(new NDArrayView(pa.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
var pa1 = new Parameter(bs[1]);
pa1.SetValue(new NDArrayView(pa1.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
var pa2 = new Parameter(bs[2]);
pa2.SetValue(new NDArrayView(pa2.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));
var pa3 = new Parameter(bs[3]);
pa3.SetValue(new NDArrayView(pa3.Shape, new float[] { 0.16f, 0.17f, 0.18f }, device));

//set value to weights parameters
var ws = ft.Where(p => p.Name.Contains("_w")).ToList();
var ws0 = new Parameter(ws[0]);
var ws1 = new Parameter(ws[1]);
var ws2 = new Parameter(ws[2]);
var ws3 = new Parameter(ws[3]);
(ws0).SetValue(new NDArrayView(ws0.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
(ws1).SetValue(new NDArrayView(ws1.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
(ws2).SetValue(new NDArrayView(ws2.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));
(ws3).SetValue(new NDArrayView(ws3.Shape, new float[] { 0.01f, 0.03f, 0.05f, 0.02f, 0.04f, 0.06f }, device));

//set value to update parameters
var us = ft.Where(p => p.Name.Contains("_u")).ToList();
var us0 = new Parameter(us[0]);
var us1 = new Parameter(us[1]);
var us2 = new Parameter(us[2]);
var us3 = new Parameter(us[3]);
(us0).SetValue(new NDArrayView(us0.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
(us1).SetValue(new NDArrayView(us1.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
(us2).SetValue(new NDArrayView(us2.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));
(us3).SetValue(new NDArrayView(us3.Shape, new float[] {  0.07f, 0.10f, 0.13f, 0.08f, 0.11f, 0.14f, 0.09f, 0.12f, 0.15f }, device));

//evaluate
//Evaluate model after weights are setup
var inV = new Dictionary<Variable, Value>();

//evaluate output when previous values are zero
var outV11 = new Dictionary<Variable, Value>();
lstmCell.H.Evaluate(inV, outV11, device);

//test result values
var result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
Assert.AreEqual(result[0][0], 0.06286034f);
Assert.AreEqual(result[0][1], 0.0878196657f);
Assert.AreEqual(result[0][2], 0.114274308f);

//evaluate cell state
var outV = new Dictionary<Variable, Value>();
lstmCell.C.Evaluate(inV, outV, device);

var resultc = outV[lstmCell.C].GetDenseData<float>(lstmCell.C);
Assert.AreEqual(resultc[0][0], 0.114309229f);
Assert.AreEqual(resultc[0][1], 0.15543206f);
Assert.AreEqual(resultc[0][2], 0.197323829f);

//evaluate second value, with previous values as previous state
//setup previous state and output
ct_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { resultc[0][0], resultc[0][1], resultc[0][2] }, device);
ht_1Values = Value.CreateBatch<float>(new NDShape(1, 3), new float[] { result[0][0], result[0][1], result[0][2] }, device);

//Prepare for the evaluation
inV = new Dictionary<Variable, Value>();

outV11 = new Dictionary<Variable, Value>();
lstmCell.H.Evaluate(inV, outV11, device);

//test result values
result = outV11[lstmCell.H].GetDenseData<float>(lstmCell.H);
Assert.AreEqual(result[0][0], 0.128203377f);
Assert.AreEqual(result[0][1], 0.206633776f);
Assert.AreEqual(result[0][2], 0.288335562f);

//evaluate cell state
outV = new Dictionary<Variable, Value>();