Neural Network Regression From Scratch With Batch Training Using C#

A few days ago I brushed off some old code that implements a neural network regression system, from scratch, using C#. For simplicity, I used online training where model weights and biases are updated after every training item.

While I was walking my dogs and thinking, I realized that I had forgotten to write a batch (also called mini-batch) training version, where gradients are accumulated over a batch of training items and the weights and biases are updated once per batch. Batch training is more flexible than online training. So, when my dogs finished walking me, I implemented a batch training method.

Note: Code updated on 02/07/2024.

I used one of my standard synthetic datasets where the goal is to predict a person’s income from their sex, age, State, and political leaning. The data looks like:

1, 0.24, 1, 0, 0, 0.2950, 0, 0, 1
0, 0.39, 0, 0, 1, 0.5120, 0, 1, 0
1, 0.63, 0, 1, 0, 0.7580, 1, 0, 0
0, 0.36, 1, 0, 0, 0.4450, 0, 1, 0
. . .

The fields are sex (male = 0, female = 1), age (divided by 100), State (Michigan = 100, Nebraska = 010, Oklahoma = 001), income (divided by 100,000), and political leaning (conservative = 100, moderate = 010, liberal = 001). There are 200 training items and 40 test items.
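
To make the encoding concrete, here is a minimal sketch of a helper that maps one raw record to the 8-value predictor format. The EncodePerson() method is hypothetical and is not part of the demo program; the income column is the target, so it is not encoded as a predictor.

// hypothetical helper, not part of the demo program:
// encodes one raw record into the 8 predictor values
static double[] EncodePerson(int sex, int age,
  string state, string politics)
{
  double[] result = new double[8];
  result[0] = sex;          // male = 0, female = 1
  result[1] = age / 100.0;  // normalize age
  if (state == "michigan") result[2] = 1;       // 1 0 0
  else if (state == "nebraska") result[3] = 1;  // 0 1 0
  else if (state == "oklahoma") result[4] = 1;  // 0 0 1
  if (politics == "conservative") result[5] = 1;   // 1 0 0
  else if (politics == "moderate") result[6] = 1;  // 0 1 0
  else if (politics == "liberal") result[7] = 1;   // 0 0 1
  return result;
}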

My from-scratch neural network has a single hidden layer — it’s possible but not practical to implement deep architectures (more than one hidden layer) from scratch. In high-level pseudo-code, batch training is:

// n = 200; bs = 10
// batches per epoch = 200 / 10 = 20
// for epoch = 0; epoch < maxEpochs; ++epoch
//   shuffle indices
//   for batch = 0; batch < bpe; ++batch
//     for item = 0; item < bs; ++item
//       compute output
//       accum grads
//     end-item
//     update weights
//     zero-out grads
//   end-batches
// end-epochs
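
With n = 200 training items and a batch size of 10, each epoch makes 20 weight-update steps, compared to 200 per epoch with online training. A batch size of 1 is equivalent to online training, and a batch size of n is full-batch training, which is one reason the batch approach is more flexible.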

Implementing a neural network from scratch is a lot of work but I have done this many, many times over the past few decades and so the process was quick and relatively easy. And fun!

Online training and batch training are essentially equivalent (well, mostly). Standard six-sided dice and Sicherman dice are essentially equivalent. If you toss a pair of standard dice and calculate their sum, you get a value from 2 to 12. The highest probability is getting a sum of 7 — there are six ways that can happen. The first die of a pair of Sicherman dice has sides (1, 2, 2, 3, 3, 4). The second die of the pair has (1, 3, 4, 5, 6, 8). If you toss a pair of Sicherman dice, you get the same possible outcomes (2-12) with the same probabilities as standard dice. Neat!
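
The dice claim is easy to verify by brute-force enumeration. Here is a minimal standalone sketch (not part of the demo program) that counts the number of ways each sum from 2 to 12 can occur for both pairs of dice:

using System;

class SichermanCheck
{
  static void Main()
  {
    int[] std = { 1, 2, 3, 4, 5, 6 };   // standard die
    int[] sic1 = { 1, 2, 2, 3, 3, 4 };  // Sicherman die 1
    int[] sic2 = { 1, 3, 4, 5, 6, 8 };  // Sicherman die 2

    int[] stdCounts = SumCounts(std, std);
    int[] sicCounts = SumCounts(sic1, sic2);
    for (int s = 2; s <= 12; ++s)
      Console.WriteLine("sum " + s + ": standard = " +
        stdCounts[s] + "  Sicherman = " + sicCounts[s]);
    // the two columns are identical; for example,
    // a sum of 7 occurs 6 ways for each pair
  }

  static int[] SumCounts(int[] d1, int[] d2)
  {
    int[] counts = new int[13];  // indices 2 . . 12 used
    foreach (int a in d1)
      foreach (int b in d2)
        ++counts[a + b];
    return counts;
  }
}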

The complete demo code is presented below. The training and test data are at the bottom of this post.

Update: My original post had a bug that was spotted by reader Thorsten Kleppe. See the TrainBatch() method. Thanks Thorsten! Greatly appreciate your efforts!

using System;
using System.Collections.Generic;
using System.IO;

namespace NeuralNetworkRegression
{
  internal class NeuralRegressionProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nNeural network " +
        "regression C# ");

      Console.WriteLine("Predict income from sex," +
        " age, State, political leaning ");

      Console.WriteLine("\nLoading train and" +
        " test data from file ");

      string trainFile =
        "..\\..\\..\\Data\\people_train.txt";
      // sex, age,  State,  income,  politics
      //  0   0.32  1 0 0   0.65400  0 0 1
      double[][] trainX = Utils.MatLoad(trainFile,
        new int[] { 0, 1, 2, 3, 4, 6, 7, 8 }, ',', "#");
      double[] trainY =
        Utils.MatToVec(Utils.MatLoad(trainFile,
        new int[] { 5 }, ',', "#"));

      string testFile =
        "..\\..\\..\\Data\\people_test.txt";
      double[][] testX = Utils.MatLoad(testFile,
        new int[] { 0, 1, 2, 3, 4, 6, 7, 8 }, ',', "#");
      double[] testY =
        Utils.MatToVec(Utils.MatLoad(testFile,
        new int[] { 5 }, ',', "#"));
      Console.WriteLine("Done ");

      Console.WriteLine("\nFirst three X data: ");
      for (int i = 0; i < 3; ++i)
        Utils.VecShow(trainX[i], 2, 6, true);

      Console.WriteLine("\nFirst three target Y: ");
      for (int i = 0; i < 3; ++i)
        Console.WriteLine(trainY[i].ToString("F5"));

      // ----------------------------------------------------

      Console.WriteLine("\nCreating 8-100-1 tanh()" +
        " identity() neural network ");
      NeuralNetwork nn =
        new NeuralNetwork(8, 100, 1, seed: 0);
      Console.WriteLine("Done ");

      // batch training params
      Console.WriteLine("\nPreparing train parameters ");
      int maxEpochs = 200;
      double lrnRate = 0.01;
      int batSize = 10;

      Console.WriteLine("\nmaxEpochs = " +
        maxEpochs);
      Console.WriteLine("lrnRate = " +
        lrnRate.ToString("F3"));
      Console.WriteLine("batSize = " + batSize);

      Console.WriteLine("\nStarting (batch) training ");
      nn.TrainBatch(trainX, trainY, lrnRate,
        batSize, maxEpochs);
      Console.WriteLine("Done ");

      Console.WriteLine("\nEvaluating model ");
      double trainAcc = nn.Accuracy(trainX, trainY, 0.10);
      Console.WriteLine("Accuracy (10%) on train data = " +
        trainAcc.ToString("F4"));

      double testAcc = nn.Accuracy(testX, testY, 0.10);
      Console.WriteLine("Accuracy (10%) on test data  = " +
        testAcc.ToString("F4"));

      Console.WriteLine("\nPredicting income for male" +
        " 34 Oklahoma moderate ");
      double[] X = new double[] { 0, 0.34, 0, 0, 1, 0, 1, 0 };
      double y = nn.ComputeOutput(X);
      Console.WriteLine("Predicted income = " +
        y.ToString("F5"));

      //// save trained model wts
      //Console.WriteLine("\nSaving model wts to file ");
      //string fn = "..\\..\\..\\Models\\people_wts.txt";
      //nn.SaveWeights(fn);
      //Console.WriteLine("Done ");

      //// load saved wts later 
      //Console.WriteLine("\nLoading saved wts to new NN ");
      //NeuralNetwork nn2 = new NeuralNetwork(8, 100, 1, 0);
      //nn2.LoadWeights(fn);
      //Console.WriteLine("Done ");

      //Console.WriteLine("\nPredicting income for male" +
      //  " 34 Oklahoma moderate ");
      //double y2 = nn2.ComputeOutput(X);
      //Console.WriteLine("Predicted income = " +
      //  y2.ToString("F5"));

      Console.WriteLine("\nEnd demo ");
      Console.ReadLine();
    } // Main

  } // Program

  // --------------------------------------------------------

  public class NeuralNetwork
  {
    private int ni; // number input nodes
    private int nh;
    private int no;

    private double[] iNodes;
    private double[][] ihWeights; // input-hidden
    private double[] hBiases;
    private double[] hNodes;

    private double[][] hoWeights; // hidden-output
    private double[] oBiases;
    private double[] oNodes;  // single val as array

    // gradients
    private double[][] ihGrads;
    private double[] hbGrads;
    private double[][] hoGrads;
    private double[] obGrads;

    private Random rnd;

    // ------------------------------------------------------

    public NeuralNetwork(int numIn, int numHid,
      int numOut, int seed)
    {
      this.ni = numIn;
      this.nh = numHid;
      this.no = numOut;  // 1 for regression

      this.iNodes = new double[numIn];

      this.ihWeights = Utils.MatCreate(numIn, numHid);
      this.hBiases = new double[numHid];
      this.hNodes = new double[numHid];

      this.hoWeights = Utils.MatCreate(numHid, numOut);
      this.oBiases = new double[numOut];  // [1]
      this.oNodes = new double[numOut];  // [1]

      this.ihGrads = Utils.MatCreate(numIn, numHid);
      this.hbGrads = new double[numHid];
      this.hoGrads = Utils.MatCreate(numHid, numOut);
      this.obGrads = new double[numOut];

      this.rnd = new Random(seed);
      this.InitWeights(); // all weights and biases
    } // ctor

    // ------------------------------------------------------

    private void InitWeights() // helper for ctor
    {
      // weights and biases to small random values
      double lo = -0.01; double hi = +0.01;
      int numWts = (this.ni * this.nh) +
        (this.nh * this.no) + this.nh + this.no;
      double[] initialWeights = new double[numWts];
      for (int i = 0; i < initialWeights.Length; ++i)
        initialWeights[i] =
          (hi - lo) * rnd.NextDouble() + lo;
      this.SetWeights(initialWeights);
    }

    // ------------------------------------------------------

    public void SetWeights(double[] wts)
    {
      // copy serialized weights and biases in wts[] 
      // to ih weights, ih biases, ho weights, ho biases
      int numWts = (this.ni * this.nh) +
        (this.nh * this.no) + this.nh + this.no;
      if (wts.Length != numWts)
        throw new Exception("Bad array in SetWeights");

      int k = 0; // points into wts param

      for (int i = 0; i < this.ni; ++i)
        for (int j = 0; j < this.nh; ++j)
          this.ihWeights[i][j] = wts[k++];
      for (int i = 0; i < this.nh; ++i)
        this.hBiases[i] = wts[k++];
      for (int i = 0; i < this.nh; ++i)
        for (int j = 0; j < this.no; ++j)
          this.hoWeights[i][j] = wts[k++];
      for (int i = 0; i < this.no; ++i)
        this.oBiases[i] = wts[k++];
    }

    // ------------------------------------------------------

    public double[] GetWeights()
    {
      int numWts = (this.ni * this.nh) +
        (this.nh * this.no) + this.nh + this.no;
      double[] result = new double[numWts];
      int k = 0;
      for (int i = 0; i < ihWeights.Length; ++i)
        for (int j = 0; j < this.ihWeights[0].Length; ++j)
          result[k++] = this.ihWeights[i][j];
      for (int i = 0; i < this.hBiases.Length; ++i)
        result[k++] = this.hBiases[i];
      for (int i = 0; i < this.hoWeights.Length; ++i)
        for (int j = 0; j < this.hoWeights[0].Length; ++j)
          result[k++] = this.hoWeights[i][j];
      for (int i = 0; i < this.oBiases.Length; ++i)
        result[k++] = this.oBiases[i];
      return result;
    }

    // ------------------------------------------------------

    public double ComputeOutput(double[] x)
    {
      double[] hSums = new double[this.nh]; // scratch 
      double[] oSums = new double[this.no]; // out sums

      for (int i = 0; i < x.Length; ++i)
        this.iNodes[i] = x[i];
      // note: copying x into iNodes isn't strictly needed
      // unless you implement a ToString(); it would be
      // more efficient to use x[] directly.

      // 1. compute i-h sum of weights * inputs
      for (int j = 0; j < this.nh; ++j)
        for (int i = 0; i < this.ni; ++i)
          hSums[j] += this.iNodes[i] *
            this.ihWeights[i][j]; // note +=

      // 2. add biases to hidden sums
      for (int i = 0; i < this.nh; ++i)
        hSums[i] += this.hBiases[i];

      // 3. apply hidden activation
      for (int i = 0; i < this.nh; ++i)
        this.hNodes[i] = HyperTan(hSums[i]);

      // 4. compute h-o sum of wts * hOutputs
      for (int j = 0; j < this.no; ++j)
        for (int i = 0; i < this.nh; ++i)
          oSums[j] += this.hNodes[i] *
            this.hoWeights[i][j];  // [1]

      // 5. add biases to output sums
      for (int i = 0; i < this.no; ++i)
        oSums[i] += this.oBiases[i];

      // 6. apply output activation
      for (int i = 0; i < this.no; ++i)
        this.oNodes[i] = Identity(oSums[i]);

      return this.oNodes[0];  // single value
    }

    // ------------------------------------------------------

    private static double HyperTan(double x)
    {
      if (x < -10.0) return -1.0;
      else if (x > 10.0) return 1.0;
      else return Math.Tanh(x);
    }

    // ------------------------------------------------------

    private static double Identity(double x)
    {
      return x;
    }

    // ------------------------------------------------------

    private void ZeroOutGrads()
    {
      for (int i = 0; i < this.ni; ++i)
        for (int j = 0; j < this.nh; ++j)
          this.ihGrads[i][j] = 0.0;

      for (int j = 0; j < this.nh; ++j)
        this.hbGrads[j] = 0.0;

      for (int j = 0; j < this.nh; ++j)
        for (int k = 0; k < this.no; ++k)
          this.hoGrads[j][k] = 0.0;

      for (int k = 0; k < this.no; ++k)
        this.obGrads[k] = 0.0;
    }  // ZeroOutGrads()

    // ------------------------------------------------------

    private void AccumGrads(double y)
    {
      double[] oSignals = new double[this.no];
      double[] hSignals = new double[this.nh];

      // 1. compute output node scratch signals;
      // the 1 is the derivative of Identity()
      for (int k = 0; k < this.no; ++k)
        oSignals[k] = 1 * (this.oNodes[k] - y);

      // 2. accum hidden-to-output gradients 
      for (int j = 0; j < this.nh; ++j)
        for (int k = 0; k < this.no; ++k)
          hoGrads[j][k] +=
           oSignals[k] * this.hNodes[j];

      // 3. accum output node bias gradients
      for (int k = 0; k < this.no; ++k)
        obGrads[k] +=
         oSignals[k] * 1.0;  // 1.0 dummy 

      // 4. compute hidden node signals
      for (int j = 0; j < this.nh; ++j)
      {
        double sum = 0.0;
        for (int k = 0; k < this.no; ++k)
          sum += oSignals[k] * this.hoWeights[j][k];

        double derivative =
          (1 - this.hNodes[j]) *
          (1 + this.hNodes[j]);  // assumes tanh
        hSignals[j] = derivative * sum;
      }

      // 5. accum input-to-hidden gradients
      for (int i = 0; i < this.ni; ++i)
        for (int j = 0; j < this.nh; ++j)
          this.ihGrads[i][j] +=
            hSignals[j] * this.iNodes[i];

      // 6. accum hidden node bias gradients
      for (int j = 0; j < this.nh; ++j)
        this.hbGrads[j] +=
          hSignals[j] * 1.0;  // 1.0 dummy
    } // AccumGrads

    // ------------------------------------------------------

    private void UpdateWeights(double lrnRate)
    {
      // assumes all gradients computed
      // 1. update input-to-hidden weights
      for (int i = 0; i < this.ni; ++i)
      {
        for (int j = 0; j < this.nh; ++j)
        {
          double delta = -1.0 * lrnRate *
            this.ihGrads[i][j];
          this.ihWeights[i][j] += delta;
        }
      }

      // 2. update hidden node biases
      for (int j = 0; j < this.nh; ++j)
      {
        double delta = -1.0 * lrnRate *
          this.hbGrads[j];
        this.hBiases[j] += delta;
      }

      // 3. update hidden-to-output weights
      for (int j = 0; j < this.nh; ++j)
      {
        for (int k = 0; k < this.no; ++k)
        {
          double delta = -1.0 * lrnRate *
            this.hoGrads[j][k];
          this.hoWeights[j][k] += delta;
        }
      }

      // 4. update output node biases
      for (int k = 0; k < this.no; ++k)
      {
        double delta = -1.0 * lrnRate *
          this.obGrads[k];
        this.oBiases[k] += delta;
      }
    } // UpdateWeights()

    // ------------------------------------------------------

    public void TrainBatch(double[][] trainX,
      double[] trainY, double lrnRate, int batSize,
      int maxEpochs)
    {
      int n = trainX.Length;  // 200
      int batchesPerEpoch = n / batSize;  // 20
      int freq = maxEpochs / 10;  // to show progress
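      // note: because of integer division, if n isn't evenly
      // divisible by batSize, the last n % batSize shuffled
      // items are skipped each epoch (200 / 10 divides
      // evenly here, so all items are used)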

      int[] indices = new int[n];
      for (int i = 0; i < n; ++i)
        indices[i] = i;

      // ----------------------------------------------------
      //
      // n = 200; bs = 10
      // batches per epoch = 200 / 10 = 20

      // for epoch = 0; epoch < maxEpochs; ++epoch
      //   shuffle indices
      //   for batch = 0; batch < bpe; ++batch
      //     for item = 0; item < bs; ++item
      //       compute output
      //       accum grads
      //     end-item
      //     update weights
      //     zero-out grads
      //   end-batches
      // end-epochs
      //
      // ----------------------------------------------------

      for (int epoch = 0; epoch < maxEpochs; ++epoch)
      {
        Shuffle(indices);
        int ptr = 0;  // points into indices
        for (int batIdx = 0; batIdx < batchesPerEpoch;
          ++batIdx) // 0, 1, . . 19
        {
          for (int i = 0; i < batSize; ++i) // 0 . . 9
          {
            int ii = indices[ptr++];  // compute output
            double[] x = trainX[ii];
            double y = trainY[ii];
            this.ComputeOutput(x);  // into this.oNodes
            this.AccumGrads(y);
          }
          this.UpdateWeights(lrnRate);
          this.ZeroOutGrads(); // prep for next batch
        } // batches

        if (epoch % freq == 0)  // progress every few epochs
        {
          double mse = this.Error(trainX, trainY);
          double acc = this.Accuracy(trainX, trainY, 0.10);

          string s1 = "epoch: " + epoch.ToString().PadLeft(4);
          string s2 = "  MSE = " + mse.ToString("F4");
          string s3 = "  acc = " + acc.ToString("F4");
          Console.WriteLine(s1 + s2 + s3);
        }
      } // epoch
    } // TrainBatch

    // ------------------------------------------------------

    private void Shuffle(int[] sequence)
    {
      // Fisher-Yates shuffle
      for (int i = 0; i < sequence.Length; ++i)
      {
        int r = this.rnd.Next(i, sequence.Length);
        int tmp = sequence[r];
        sequence[r] = sequence[i];
        sequence[i] = tmp;
        //sequence[i] = i; // for testing
      }
    } // Shuffle

    // ------------------------------------------------------

    public double Error(double[][] trainX, double[] trainY)
    {
      // MSE
      int n = trainX.Length;
      double sumSquaredError = 0.0;
      for (int i = 0; i < n; ++i)
      {
        double predY = this.ComputeOutput(trainX[i]);
        double actualY = trainY[i];
        sumSquaredError += (predY - actualY) *
          (predY - actualY);
      }
      return sumSquaredError / n;
    } // Error

    // ------------------------------------------------------

    public double Accuracy(double[][] dataX,
      double[] dataY, double pctClose)
    {
      // percentage of predictions within pctClose of target
      int n = dataX.Length;
      int nCorrect = 0;
      int nWrong = 0;
      for (int i = 0; i < n; ++i)
      {
        double predY = this.ComputeOutput(dataX[i]);
        double actualY = dataY[i];
        if (Math.Abs(predY - actualY) <
          Math.Abs(pctClose * actualY))
          ++nCorrect;
        else
          ++nWrong;
      }
      return (nCorrect * 1.0) / (nCorrect + nWrong);
    }

    // ------------------------------------------------------

    public void SaveWeights(string fn)
    {
      FileStream ofs = new FileStream(fn, FileMode.Create);
      StreamWriter sw = new StreamWriter(ofs);

      double[] wts = this.GetWeights();
      for (int i = 0; i < wts.Length; ++i)
        sw.WriteLine(wts[i].ToString("F8")); 
      sw.Close();
      ofs.Close();
    }

    public void LoadWeights(string fn)
    {
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      List<double> listWts = new List<double>();
      string line = "";  // one wt per line
      while ((line = sr.ReadLine()) != null)
      {
        // if (line.StartsWith(comment) == true)
        //   continue;
        listWts.Add(double.Parse(line));
      }
      sr.Close();
      ifs.Close();

      double[] wts = listWts.ToArray();
      this.SetWeights(wts);
    }

    // ------------------------------------------------------

  } // NeuralNetwork class

  // --------------------------------------------------------

  public class Utils
  {
    public static double[][] VecToMat(double[] vec,
      int rows, int cols)
    {
      // vector to row vec/matrix
      double[][] result = MatCreate(rows, cols);
      int k = 0;
      for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
          result[i][j] = vec[k++];
      return result;
    }

    // ------------------------------------------------------

    public static double[][] MatCreate(int rows,
      int cols)
    {
      double[][] result = new double[rows][];
      for (int i = 0; i < rows; ++i)
        result[i] = new double[cols];
      return result;
    }

    // ------------------------------------------------------

    static int NumNonCommentLines(string fn,
      string comment)
    {
      int ct = 0;
      string line = "";
      FileStream ifs = new FileStream(fn,
        FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      while ((line = sr.ReadLine()) != null)
        if (line.StartsWith(comment) == false)
          ++ct;
      sr.Close(); ifs.Close();
      return ct;
    }

    // ------------------------------------------------------

    public static double[][] MatLoad(string fn,
      int[] usecols, char sep, string comment)
    {
      // count number of non-comment lines
      int nRows = NumNonCommentLines(fn, comment);
      int nCols = usecols.Length;
      double[][] result = MatCreate(nRows, nCols);
      string line = "";
      string[] tokens = null;
      FileStream ifs = new FileStream(fn, FileMode.Open);
      StreamReader sr = new StreamReader(ifs);

      int i = 0;
      while ((line = sr.ReadLine()) != null)
      {
        if (line.StartsWith(comment) == true)
          continue;
        tokens = line.Split(sep);
        for (int j = 0; j < nCols; ++j)
        {
          int k = usecols[j];  // into tokens
          result[i][j] = double.Parse(tokens[k]);
        }
        ++i;
      }
      sr.Close(); ifs.Close();
      return result;
    }

    // ------------------------------------------------------

    public static double[] MatToVec(double[][] m)
    {
      int rows = m.Length;
      int cols = m[0].Length;
      double[] result = new double[rows * cols];
      int k = 0;
      for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
          result[k++] = m[i][j];

      return result;
    }

    // ------------------------------------------------------

    public static void MatShow(double[][] m,
      int dec, int wid)
    {
      for (int i = 0; i < m.Length; ++i)
      {
        for (int j = 0; j < m[0].Length; ++j)
        {
          double v = m[i][j];
          if (Math.Abs(v) < 1.0e-8) v = 0.0; // hack
          Console.Write(v.ToString("F" +
            dec).PadLeft(wid));
        }
        Console.WriteLine("");
      }
    }

    // ------------------------------------------------------

    public static void VecShow(int[] vec, int wid)
    {
      for (int i = 0; i < vec.Length; ++i)
        Console.Write(vec[i].ToString().PadLeft(wid));
      Console.WriteLine("");
    }

    // ------------------------------------------------------

    public static void VecShow(double[] vec,
      int dec, int wid, bool newLine)
    {
      for (int i = 0; i < vec.Length; ++i)
      {
        double x = vec[i];
        if (Math.Abs(x) < 1.0e-8) x = 0.0;
        Console.Write(x.ToString("F" +
          dec).PadLeft(wid));
      }
      if (newLine == true)
        Console.WriteLine("");
    }

  } // Utils class

} // ns

Training data:

# people_train.txt
#
# sex (0 = male, 1 = female), age / 100,
# state (michigan = 100, nebraska = 010,
# oklahoma = 001),
# income / 100_000,
# politics (conservative = 100,
# moderate = 010, liberal = 001)
#
1, 0.24, 1, 0, 0, 0.2950, 0, 0, 1
0, 0.39, 0, 0, 1, 0.5120, 0, 1, 0
1, 0.63, 0, 1, 0, 0.7580, 1, 0, 0
0, 0.36, 1, 0, 0, 0.4450, 0, 1, 0
1, 0.27, 0, 1, 0, 0.2860, 0, 0, 1
1, 0.50, 0, 1, 0, 0.5650, 0, 1, 0
1, 0.50, 0, 0, 1, 0.5500, 0, 1, 0
0, 0.19, 0, 0, 1, 0.3270, 1, 0, 0
1, 0.22, 0, 1, 0, 0.2770, 0, 1, 0
0, 0.39, 0, 0, 1, 0.4710, 0, 0, 1
1, 0.34, 1, 0, 0, 0.3940, 0, 1, 0
0, 0.22, 1, 0, 0, 0.3350, 1, 0, 0
1, 0.35, 0, 0, 1, 0.3520, 0, 0, 1
0, 0.33, 0, 1, 0, 0.4640, 0, 1, 0
1, 0.45, 0, 1, 0, 0.5410, 0, 1, 0
1, 0.42, 0, 1, 0, 0.5070, 0, 1, 0
0, 0.33, 0, 1, 0, 0.4680, 0, 1, 0
1, 0.25, 0, 0, 1, 0.3000, 0, 1, 0
0, 0.31, 0, 1, 0, 0.4640, 1, 0, 0
1, 0.27, 1, 0, 0, 0.3250, 0, 0, 1
1, 0.48, 1, 0, 0, 0.5400, 0, 1, 0
0, 0.64, 0, 1, 0, 0.7130, 0, 0, 1
1, 0.61, 0, 1, 0, 0.7240, 1, 0, 0
1, 0.54, 0, 0, 1, 0.6100, 1, 0, 0
1, 0.29, 1, 0, 0, 0.3630, 1, 0, 0
1, 0.50, 0, 0, 1, 0.5500, 0, 1, 0
1, 0.55, 0, 0, 1, 0.6250, 1, 0, 0
1, 0.40, 1, 0, 0, 0.5240, 1, 0, 0
1, 0.22, 1, 0, 0, 0.2360, 0, 0, 1
1, 0.68, 0, 1, 0, 0.7840, 1, 0, 0
0, 0.60, 1, 0, 0, 0.7170, 0, 0, 1
0, 0.34, 0, 0, 1, 0.4650, 0, 1, 0
0, 0.25, 0, 0, 1, 0.3710, 1, 0, 0
0, 0.31, 0, 1, 0, 0.4890, 0, 1, 0
1, 0.43, 0, 0, 1, 0.4800, 0, 1, 0
1, 0.58, 0, 1, 0, 0.6540, 0, 0, 1
0, 0.55, 0, 1, 0, 0.6070, 0, 0, 1
0, 0.43, 0, 1, 0, 0.5110, 0, 1, 0
0, 0.43, 0, 0, 1, 0.5320, 0, 1, 0
0, 0.21, 1, 0, 0, 0.3720, 1, 0, 0
1, 0.55, 0, 0, 1, 0.6460, 1, 0, 0
1, 0.64, 0, 1, 0, 0.7480, 1, 0, 0
0, 0.41, 1, 0, 0, 0.5880, 0, 1, 0
1, 0.64, 0, 0, 1, 0.7270, 1, 0, 0
0, 0.56, 0, 0, 1, 0.6660, 0, 0, 1
1, 0.31, 0, 0, 1, 0.3600, 0, 1, 0
0, 0.65, 0, 0, 1, 0.7010, 0, 0, 1
1, 0.55, 0, 0, 1, 0.6430, 1, 0, 0
0, 0.25, 1, 0, 0, 0.4030, 1, 0, 0
1, 0.46, 0, 0, 1, 0.5100, 0, 1, 0
0, 0.36, 1, 0, 0, 0.5350, 1, 0, 0
1, 0.52, 0, 1, 0, 0.5810, 0, 1, 0
1, 0.61, 0, 0, 1, 0.6790, 1, 0, 0
1, 0.57, 0, 0, 1, 0.6570, 1, 0, 0
0, 0.46, 0, 1, 0, 0.5260, 0, 1, 0
0, 0.62, 1, 0, 0, 0.6680, 0, 0, 1
1, 0.55, 0, 0, 1, 0.6270, 1, 0, 0
0, 0.22, 0, 0, 1, 0.2770, 0, 1, 0
0, 0.50, 1, 0, 0, 0.6290, 1, 0, 0
0, 0.32, 0, 1, 0, 0.4180, 0, 1, 0
0, 0.21, 0, 0, 1, 0.3560, 1, 0, 0
1, 0.44, 0, 1, 0, 0.5200, 0, 1, 0
1, 0.46, 0, 1, 0, 0.5170, 0, 1, 0
1, 0.62, 0, 1, 0, 0.6970, 1, 0, 0
1, 0.57, 0, 1, 0, 0.6640, 1, 0, 0
0, 0.67, 0, 0, 1, 0.7580, 0, 0, 1
1, 0.29, 1, 0, 0, 0.3430, 0, 0, 1
1, 0.53, 1, 0, 0, 0.6010, 1, 0, 0
0, 0.44, 1, 0, 0, 0.5480, 0, 1, 0
1, 0.46, 0, 1, 0, 0.5230, 0, 1, 0
0, 0.20, 0, 1, 0, 0.3010, 0, 1, 0
0, 0.38, 1, 0, 0, 0.5350, 0, 1, 0
1, 0.50, 0, 1, 0, 0.5860, 0, 1, 0
1, 0.33, 0, 1, 0, 0.4250, 0, 1, 0
0, 0.33, 0, 1, 0, 0.3930, 0, 1, 0
1, 0.26, 0, 1, 0, 0.4040, 1, 0, 0
1, 0.58, 1, 0, 0, 0.7070, 1, 0, 0
1, 0.43, 0, 0, 1, 0.4800, 0, 1, 0
0, 0.46, 1, 0, 0, 0.6440, 1, 0, 0
1, 0.60, 1, 0, 0, 0.7170, 1, 0, 0
0, 0.42, 1, 0, 0, 0.4890, 0, 1, 0
0, 0.56, 0, 0, 1, 0.5640, 0, 0, 1
0, 0.62, 0, 1, 0, 0.6630, 0, 0, 1
0, 0.50, 1, 0, 0, 0.6480, 0, 1, 0
1, 0.47, 0, 0, 1, 0.5200, 0, 1, 0
0, 0.67, 0, 1, 0, 0.8040, 0, 0, 1
0, 0.40, 0, 0, 1, 0.5040, 0, 1, 0
1, 0.42, 0, 1, 0, 0.4840, 0, 1, 0
1, 0.64, 1, 0, 0, 0.7200, 1, 0, 0
0, 0.47, 1, 0, 0, 0.5870, 0, 0, 1
1, 0.45, 0, 1, 0, 0.5280, 0, 1, 0
0, 0.25, 0, 0, 1, 0.4090, 1, 0, 0
1, 0.38, 1, 0, 0, 0.4840, 1, 0, 0
1, 0.55, 0, 0, 1, 0.6000, 0, 1, 0
0, 0.44, 1, 0, 0, 0.6060, 0, 1, 0
1, 0.33, 1, 0, 0, 0.4100, 0, 1, 0
1, 0.34, 0, 0, 1, 0.3900, 0, 1, 0
1, 0.27, 0, 1, 0, 0.3370, 0, 0, 1
1, 0.32, 0, 1, 0, 0.4070, 0, 1, 0
1, 0.42, 0, 0, 1, 0.4700, 0, 1, 0
0, 0.24, 0, 0, 1, 0.4030, 1, 0, 0
1, 0.42, 0, 1, 0, 0.5030, 0, 1, 0
1, 0.25, 0, 0, 1, 0.2800, 0, 0, 1
1, 0.51, 0, 1, 0, 0.5800, 0, 1, 0
0, 0.55, 0, 1, 0, 0.6350, 0, 0, 1
1, 0.44, 1, 0, 0, 0.4780, 0, 0, 1
0, 0.18, 1, 0, 0, 0.3980, 1, 0, 0
0, 0.67, 0, 1, 0, 0.7160, 0, 0, 1
1, 0.45, 0, 0, 1, 0.5000, 0, 1, 0
1, 0.48, 1, 0, 0, 0.5580, 0, 1, 0
0, 0.25, 0, 1, 0, 0.3900, 0, 1, 0
0, 0.67, 1, 0, 0, 0.7830, 0, 1, 0
1, 0.37, 0, 0, 1, 0.4200, 0, 1, 0
0, 0.32, 1, 0, 0, 0.4270, 0, 1, 0
1, 0.48, 1, 0, 0, 0.5700, 0, 1, 0
0, 0.66, 0, 0, 1, 0.7500, 0, 0, 1
1, 0.61, 1, 0, 0, 0.7000, 1, 0, 0
0, 0.58, 0, 0, 1, 0.6890, 0, 1, 0
1, 0.19, 1, 0, 0, 0.2400, 0, 0, 1
1, 0.38, 0, 0, 1, 0.4300, 0, 1, 0
0, 0.27, 1, 0, 0, 0.3640, 0, 1, 0
1, 0.42, 1, 0, 0, 0.4800, 0, 1, 0
1, 0.60, 1, 0, 0, 0.7130, 1, 0, 0
0, 0.27, 0, 0, 1, 0.3480, 1, 0, 0
1, 0.29, 0, 1, 0, 0.3710, 1, 0, 0
0, 0.43, 1, 0, 0, 0.5670, 0, 1, 0
1, 0.48, 1, 0, 0, 0.5670, 0, 1, 0
1, 0.27, 0, 0, 1, 0.2940, 0, 0, 1
0, 0.44, 1, 0, 0, 0.5520, 1, 0, 0
1, 0.23, 0, 1, 0, 0.2630, 0, 0, 1
0, 0.36, 0, 1, 0, 0.5300, 0, 0, 1
1, 0.64, 0, 0, 1, 0.7250, 1, 0, 0
1, 0.29, 0, 0, 1, 0.3000, 0, 0, 1
0, 0.33, 1, 0, 0, 0.4930, 0, 1, 0
0, 0.66, 0, 1, 0, 0.7500, 0, 0, 1
0, 0.21, 0, 0, 1, 0.3430, 1, 0, 0
1, 0.27, 1, 0, 0, 0.3270, 0, 0, 1
1, 0.29, 1, 0, 0, 0.3180, 0, 0, 1
0, 0.31, 1, 0, 0, 0.4860, 0, 1, 0
1, 0.36, 0, 0, 1, 0.4100, 0, 1, 0
1, 0.49, 0, 1, 0, 0.5570, 0, 1, 0
0, 0.28, 1, 0, 0, 0.3840, 1, 0, 0
0, 0.43, 0, 0, 1, 0.5660, 0, 1, 0
0, 0.46, 0, 1, 0, 0.5880, 0, 1, 0
1, 0.57, 1, 0, 0, 0.6980, 1, 0, 0
0, 0.52, 0, 0, 1, 0.5940, 0, 1, 0
0, 0.31, 0, 0, 1, 0.4350, 0, 1, 0
0, 0.55, 1, 0, 0, 0.6200, 0, 0, 1
1, 0.50, 1, 0, 0, 0.5640, 0, 1, 0
1, 0.48, 0, 1, 0, 0.5590, 0, 1, 0
0, 0.22, 0, 0, 1, 0.3450, 1, 0, 0
1, 0.59, 0, 0, 1, 0.6670, 1, 0, 0
1, 0.34, 1, 0, 0, 0.4280, 0, 0, 1
0, 0.64, 1, 0, 0, 0.7720, 0, 0, 1
1, 0.29, 0, 0, 1, 0.3350, 0, 0, 1
0, 0.34, 0, 1, 0, 0.4320, 0, 1, 0
0, 0.61, 1, 0, 0, 0.7500, 0, 0, 1
1, 0.64, 0, 0, 1, 0.7110, 1, 0, 0
0, 0.29, 1, 0, 0, 0.4130, 1, 0, 0
1, 0.63, 0, 1, 0, 0.7060, 1, 0, 0
0, 0.29, 0, 1, 0, 0.4000, 1, 0, 0
0, 0.51, 1, 0, 0, 0.6270, 0, 1, 0
0, 0.24, 0, 0, 1, 0.3770, 1, 0, 0
1, 0.48, 0, 1, 0, 0.5750, 0, 1, 0
1, 0.18, 1, 0, 0, 0.2740, 1, 0, 0
1, 0.18, 1, 0, 0, 0.2030, 0, 0, 1
1, 0.33, 0, 1, 0, 0.3820, 0, 0, 1
0, 0.20, 0, 0, 1, 0.3480, 1, 0, 0
1, 0.29, 0, 0, 1, 0.3300, 0, 0, 1
0, 0.44, 0, 0, 1, 0.6300, 1, 0, 0
0, 0.65, 0, 0, 1, 0.8180, 1, 0, 0
0, 0.56, 1, 0, 0, 0.6370, 0, 0, 1
0, 0.52, 0, 0, 1, 0.5840, 0, 1, 0
0, 0.29, 0, 1, 0, 0.4860, 1, 0, 0
0, 0.47, 0, 1, 0, 0.5890, 0, 1, 0
1, 0.68, 1, 0, 0, 0.7260, 0, 0, 1
1, 0.31, 0, 0, 1, 0.3600, 0, 1, 0
1, 0.61, 0, 1, 0, 0.6250, 0, 0, 1
1, 0.19, 0, 1, 0, 0.2150, 0, 0, 1
1, 0.38, 0, 0, 1, 0.4300, 0, 1, 0
0, 0.26, 1, 0, 0, 0.4230, 1, 0, 0
1, 0.61, 0, 1, 0, 0.6740, 1, 0, 0
1, 0.40, 1, 0, 0, 0.4650, 0, 1, 0
0, 0.49, 1, 0, 0, 0.6520, 0, 1, 0
1, 0.56, 1, 0, 0, 0.6750, 1, 0, 0
0, 0.48, 0, 1, 0, 0.6600, 0, 1, 0
1, 0.52, 1, 0, 0, 0.5630, 0, 0, 1
0, 0.18, 1, 0, 0, 0.2980, 1, 0, 0
0, 0.56, 0, 0, 1, 0.5930, 0, 0, 1
0, 0.52, 0, 1, 0, 0.6440, 0, 1, 0
0, 0.18, 0, 1, 0, 0.2860, 0, 1, 0
0, 0.58, 1, 0, 0, 0.6620, 0, 0, 1
0, 0.39, 0, 1, 0, 0.5510, 0, 1, 0
0, 0.46, 1, 0, 0, 0.6290, 0, 1, 0
0, 0.40, 0, 1, 0, 0.4620, 0, 1, 0
0, 0.60, 1, 0, 0, 0.7270, 0, 0, 1
1, 0.36, 0, 1, 0, 0.4070, 0, 0, 1
1, 0.44, 1, 0, 0, 0.5230, 0, 1, 0
1, 0.28, 1, 0, 0, 0.3130, 0, 0, 1
1, 0.54, 0, 0, 1, 0.6260, 1, 0, 0

Test data:

# people_test.txt
#
0, 0.51, 1, 0, 0, 0.6120, 0, 1, 0
0, 0.32, 0, 1, 0, 0.4610, 0, 1, 0
1, 0.55, 1, 0, 0, 0.6270, 1, 0, 0
1, 0.25, 0, 0, 1, 0.2620, 0, 0, 1
1, 0.33, 0, 0, 1, 0.3730, 0, 0, 1
0, 0.29, 0, 1, 0, 0.4620, 1, 0, 0
1, 0.65, 1, 0, 0, 0.7270, 1, 0, 0
0, 0.43, 0, 1, 0, 0.5140, 0, 1, 0
0, 0.54, 0, 1, 0, 0.6480, 0, 0, 1
1, 0.61, 0, 1, 0, 0.7270, 1, 0, 0
1, 0.52, 0, 1, 0, 0.6360, 1, 0, 0
1, 0.30, 0, 1, 0, 0.3350, 0, 0, 1
1, 0.29, 1, 0, 0, 0.3140, 0, 0, 1
0, 0.47, 0, 0, 1, 0.5940, 0, 1, 0
1, 0.39, 0, 1, 0, 0.4780, 0, 1, 0
1, 0.47, 0, 0, 1, 0.5200, 0, 1, 0
0, 0.49, 1, 0, 0, 0.5860, 0, 1, 0
0, 0.63, 0, 0, 1, 0.6740, 0, 0, 1
0, 0.30, 1, 0, 0, 0.3920, 1, 0, 0
0, 0.61, 0, 0, 1, 0.6960, 0, 0, 1
0, 0.47, 0, 0, 1, 0.5870, 0, 1, 0
1, 0.30, 0, 0, 1, 0.3450, 0, 0, 1
0, 0.51, 0, 0, 1, 0.5800, 0, 1, 0
0, 0.24, 1, 0, 0, 0.3880, 0, 1, 0
0, 0.49, 1, 0, 0, 0.6450, 0, 1, 0
1, 0.66, 0, 0, 1, 0.7450, 1, 0, 0
0, 0.65, 1, 0, 0, 0.7690, 1, 0, 0
0, 0.46, 0, 1, 0, 0.5800, 1, 0, 0
0, 0.45, 0, 0, 1, 0.5180, 0, 1, 0
0, 0.47, 1, 0, 0, 0.6360, 1, 0, 0
0, 0.29, 1, 0, 0, 0.4480, 1, 0, 0
0, 0.57, 0, 0, 1, 0.6930, 0, 0, 1
0, 0.20, 1, 0, 0, 0.2870, 0, 0, 1
0, 0.35, 1, 0, 0, 0.4340, 0, 1, 0
0, 0.61, 0, 0, 1, 0.6700, 0, 0, 1
0, 0.31, 0, 0, 1, 0.3730, 0, 1, 0
1, 0.18, 1, 0, 0, 0.2080, 0, 0, 1
1, 0.26, 0, 0, 1, 0.2920, 0, 0, 1
0, 0.28, 1, 0, 0, 0.3640, 0, 0, 1
0, 0.59, 0, 0, 1, 0.6940, 0, 0, 1

2 Responses to Neural Network Regression From Scratch With Batch Training Using C#

  1. Thorsten Kleppe says:

    Very nice result!

    But change:
    this.output = new double[numOut]; // [1]
    to:
    this.oNodes = new double[numOut]; // [1]

  2. bardeh2014 says:

    Thank you for this insight 🙂
    Picked it up at: https://visualstudiomagazine.com/Articles/2023/10/18/neural-network-regression.aspx

    To get it to run on a machine with Norwegian settings and locale, I changed the NumberFormatInfo used when parsing the numbers to the invariant culture; otherwise I would need to use ',' as the decimal point.

    Two simple changes in the code:
    listWts.Add(double.Parse(line)); => listWts.Add(double.Parse(line, NumberFormatInfo.InvariantInfo));

    result[i][j] = double.Parse(tokens[k]); => result[i][j] = double.Parse(tokens[k], NumberFormatInfo.InvariantInfo);
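
    (Editor's note: the NumberFormatInfo class is in the System.Globalization namespace, so a using System.Globalization; directive is needed as well.)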
