IMPORTANT: the code snippets in this article are subject to change for improvement and completion.

Neural Networks in JavaScript

What do Neural Networks and JavaScript have in common? Well, probably more than you think! They may well be a match made in heaven, and we are going to see more and more of their combined influence in the coming years.

They share a strange history of misunderstanding and undervaluation:

A 'Brief' History of Neural Nets and Deep Learning (Neural Networks)

The World's Most Misunderstood Programming Language (JavaScript)

They share an unexpected destiny too:

The Rise of Neural Networks and Deep Learning in Our Everyday Lives (Neural Networks)

JavaScript: from alert() to Machine Learning (JavaScript)

Neural Networks and JavaScript are today's buzzwords, but what are they in a few words, and how can we use them?

A Neural Network is a Universal Approximator or, to say it with different words, a flexible function that autonomously adapts its behaviour to satisfy, as well as possible, the relation between the number[s] passed to it as parameter[s] and the number[s] expected to be returned as corresponding result[s].

We all learned that "3 * 5 = 15" and that the multiplication concept behind this example ("A * B = C") has a linear (constant) behaviour with respect to all the other numbers that we could pass as parameters into "A" and "B" and expect to be returned as the result in "C".

Because of this linearity, it is simple to describe an algorithmic rule that programmatically (through a computing machine) calculates the output (15) starting from the inputs (3, 5).
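
To make the contrast concrete, a relation like this one can be written down directly as code, with no examples and no training involved, because the rule itself is the program:

    // the rule "A * B = C" expressed as an explicit algorithm
    function multiply(a, b) {
        return a * b;
    }

    console.log(multiply(3, 5)); // 15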

When we can't rely on linearity (like when passing animal photos as parameters and expecting the name of the species to be returned as the result), we may find that it is very difficult, if not impossible, to define an algorithmic approach that can accomplish the task.

This is the moment when we need a flexible function that autonomously adapts its behaviour based on known examples (expected output[s] for given input[s]). A function that, once it has adapted its behaviour to the given examples, we expect to return almost correct output[s] for any other possible input[s].

Well, in reality the output[s] will sometimes not even be "close enough" to be correct, but isn't this the same kind of error that we humans make when, for example, we fail to recognize a cat in a strange photo?

Imperfection is creativity.
Imperfection is the basis of dynamism.
Imperfection is part of the beautiful perfection of nature's diversity.

If you want to read a simple and beautiful guide about Neural Networks, try this one from Andrej Karpathy: Hacker's Guide to Neural Networks.

JavaScript, instead, is a Universal Utility or, to say it with different words, a flexible language that people all around the world continuously adapt to be used everywhere, at any time and in every way. It does not matter how much we love or hate it: JavaScript is just very useful and simply allows us to do pretty much everything, thanks to a "distributed and abstract platform" that spans from browsers to embedded devices, passing through desktops and servers.

We all know that JavaScript has its limitations and problems, but we also know that it is evolving, and that this is happening very fast. This evolution is like an emergent behaviour in response to a worldwide converging demand: how can we make everyone able to code and to produce results faster, simpler and better?

Any programming language has its own good and bad points when it comes to helping us do our job faster, simpler and better, but sometimes this is not even a matter of the language itself but rather of the surrounding ecosystem of libraries, frameworks and architectures.

And here we come to see that, in the end, it is the developer who makes the difference, using their experience, their perseverance, their talent, their connection with the working group as a whole and their ability to collaborate with the vibrant community that pushes them to improve and supports them when they are in trouble.

That's it! This is the real reason why JavaScript is used so much! It is because of us being humans, and as developers too, and because of the ideas that we want to implement. It is because we are at the center, not the technology itself. It is because JavaScript is just a lightweight solution and environment that every day gets more adapted to the needs of millions of people like us, like you. So lightweight that sometimes we just forget its imperfections and feel happy and enthusiastic, because it so simply empowers us to create amazing things!

And again, as with Neural Networks, imperfection still wins.

Imperfection reinforces creativity.
Imperfection represents the basis of dynamism.
Imperfection produces the beautiful diversity that we see in the community.

If you want to read a simple and beautiful guide about JavaScript, try this one from Marijn Haverbeke: Eloquent JavaScript.

Are you still here? If so, it really means that you want to see some magic done with Neural Networks in JavaScript.

Let's do it then!

At the moment there are various interesting Neural Network implementations in JavaScript land. One of them, DN2A, is a project of mine, but I'll show it at the end, giving precedence to other really good works from really good developers.

You should understand that lots of different Types of Neural Networks exist and that, while some of them can be applied in many different contexts, there are cases in which you may prefer a specific one that is better suited in terms of speed and quality.

The libraries that we are going to see are:

Brain.js
ConvNetJS
Mind.js
Synaptic.js
DN2A

The problem that we are going to solve is the creation of a simple system that learns how to produce the correct output[s] from the input[s] of the XOR table.
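
For reference, this is the XOR truth table that each network below will learn to reproduce: the output is 1 when exactly one of the two inputs is 1, and 0 otherwise.

    A | B | A XOR B
    0 | 0 | 0
    0 | 1 | 1
    1 | 0 | 1
    1 | 1 | 0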

While all the above-mentioned libraries work equally well in Node.js and in browsers, for simplicity I assume that you will experiment with them in a Node.js environment. If you do not have Node.js installed, please find a suitable package for your OS and be sure that npm is installed too.

Brain.js

Install the library with npm install brain in a folder of your choice.
Create a file named test.js in the same folder and save the following code into it.
Run the application with node test.js from the same folder.

    // import the library
    var brain = require("brain");

    // create a network with two hidden layers of 4 neurons each
    var neuralNetwork = new brain.NeuralNetwork({
        hiddenLayers: [4, 4],
        momentum: 0.7
    });

    // XOR training set: each pattern pairs an input with its expected output
    var trainingPatterns = [
        {
            input: [0, 0],
            output: [0]
        },
        {
            input: [0, 1],
            output: [1]
        },
        {
            input: [1, 0],
            output: [1]
        },
        {
            input: [1, 1],
            output: [0]
        }
    ];

    // train until the error threshold or the iteration limit is reached
    neuralNetwork.train(
        trainingPatterns,
        {
            errorThresh: 0.005,
            iterations: 50000,
            log: true,
            logPeriod: 1,
            learningRate: 0.3
        }
    );

    // query the trained network with each possible input pattern
    var inputPatterns = [
        [0, 0],
        [0, 1],
        [1, 0],
        [1, 1]
    ];
    inputPatterns.forEach(function(inputPattern) {
        console.log("[" + inputPattern.join(", ") + "] => [" + neuralNetwork.run(inputPattern) + "]");
    });

ConvNetJS
Example creation in progress.
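
In the meantime, here is a minimal sketch of how the same XOR task could be set up with ConvNetJS, assuming the convnetjs npm package and its documented Net, Trainer and Vol APIs; the layer sizes, learning parameters and epoch count are illustrative choices rather than values from a finished example.

    // import the library (assumption: installed with npm install convnetjs)
    var convnetjs = require("convnetjs");

    // define a small fully connected network: 2 inputs, two hidden layers
    // of 4 sigmoid neurons each, and a single regression output
    var layerDefinitions = [
        { type: "input", out_sx: 1, out_sy: 1, out_depth: 2 },
        { type: "fc", num_neurons: 4, activation: "sigmoid" },
        { type: "fc", num_neurons: 4, activation: "sigmoid" },
        { type: "regression", num_neurons: 1 }
    ];

    var net = new convnetjs.Net();
    net.makeLayers(layerDefinitions);

    // stochastic gradient descent trainer with illustrative parameters
    var trainer = new convnetjs.Trainer(net, {
        method: "sgd",
        learning_rate: 0.3,
        momentum: 0.7,
        batch_size: 1,
        l2_decay: 0.0
    });

    // XOR training set
    var trainingPatterns = [
        { input: [0, 0], output: [0] },
        { input: [0, 1], output: [1] },
        { input: [1, 0], output: [1] },
        { input: [1, 1], output: [0] }
    ];

    // train for a fixed number of epochs over the whole XOR table
    for (var epoch = 0; epoch < 10000; epoch++) {
        trainingPatterns.forEach(function(trainingPattern) {
            trainer.train(new convnetjs.Vol(trainingPattern.input), trainingPattern.output);
        });
    }

    // query the trained network with each possible input pattern
    [[0, 0], [0, 1], [1, 0], [1, 1]].forEach(function(inputPattern) {
        var outputVolume = net.forward(new convnetjs.Vol(inputPattern));
        console.log("[" + inputPattern.join(", ") + "] => [" + outputVolume.w[0] + "]");
    });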

Mind.js

Install the library with npm install node-mind in a folder of your choice.
Create a file named test.js in the same folder and save the following code into it.
Run the application with node test.js from the same folder.

    // import the library
    var Mind = require("node-mind");

    // create the network
    var mind = Mind({
        activator: "sigmoid",
        learningRate: 0.7,
        iterations: 20000,
        hiddenLayers: 2,
        hiddenUnits: 4
    });

    // log the iteration counter while the network is learning
    mind.on("data", function(iteration, errors, results) {
        console.log(iteration);
    });

    // learn the XOR training set
    mind.learn([
        {
            input: [0, 0],
            output: [0]
        },
        {
            input: [0, 1],
            output: [1]
        },
        {
            input: [1, 0],
            output: [1]
        },
        {
            input: [1, 1],
            output: [0]
        }
    ]);

    // query the trained network with each possible input pattern
    console.log("[0, 0] => [" + mind.predict([0, 0]) + "]");
    console.log("[0, 1] => [" + mind.predict([0, 1]) + "]");
    console.log("[1, 0] => [" + mind.predict([1, 0]) + "]");
    console.log("[1, 1] => [" + mind.predict([1, 1]) + "]");

Synaptic.js

Install the library with npm install synaptic in a folder of your choice.
Create a file named test.js in the same folder and save the following code into it.
Run the application with node test.js from the same folder.

    // import the library
    var synaptic = require("synaptic");

    // build the layers: 2 inputs, two hidden layers of 4 neurons each, 1 output
    var inputLayer = new synaptic.Layer(2);
    var hiddenLayer1 = new synaptic.Layer(4);
    var hiddenLayer2 = new synaptic.Layer(4);
    var outputLayer = new synaptic.Layer(1);

    // connect the layers to each other
    inputLayer.project(hiddenLayer1);
    hiddenLayer1.project(hiddenLayer2);
    hiddenLayer2.project(outputLayer);

    // assemble the network
    var myNetwork = new synaptic.Network({
        input: inputLayer,
        hidden: [
            hiddenLayer1,
            hiddenLayer2
        ],
        output: outputLayer
    });

    // create a trainer bound to the network
    var trainer = new synaptic.Trainer(myNetwork);

    // XOR training set
    var trainingSet = [
        {
            input: [0, 0],
            output: [0]
        },
        {
            input: [0, 1],
            output: [1]
        },
        {
            input: [1, 0],
            output: [1]
        },
        {
            input: [1, 1],
            output: [0]
        }
    ];

    // train until the error threshold or the iteration limit is reached
    trainer.train(
        trainingSet,
        {
            rate: 0.3,
            iterations: 1000000,
            error: 0.005,
            shuffle: true,
            log: 1,
            cost: synaptic.Trainer.cost.MSE
        }
    );

    // query the trained network with each possible input pattern
    console.log("[0, 0] => [" + myNetwork.activate([0, 0]) + "]"); // [0.015020775950893527]
    console.log("[0, 1] => [" + myNetwork.activate([0, 1]) + "]"); // [0.9815816381088985]
    console.log("[1, 0] => [" + myNetwork.activate([1, 0]) + "]"); // [0.9871822457132193]
    console.log("[1, 1] => [" + myNetwork.activate([1, 1]) + "]"); // [0.012950087641929467]

DN2A

Install the library with npm install dn2a in a folder of your choice.
Create a file named test.js in the same folder and save the following code into it.
Run the application with node test.js from the same folder.

    // import the library
    var DN2A = require("dn2a");

    // create a network with 2 inputs, two hidden layers of 4 neurons each and 1 output
    var neuralNetwork = new DN2A.NetworkAlpha({
        layerDimensions: [2, 4, 4, 1],
        learningMode: "continuous",
        learningRate: 0.3,
        momentumRate: 0.7,
        maximumError: 0.005,
        maximumEpoch: 50000,
        dataRepository: {},
        neuron: {
            generator: DN2A.Neuron
        },
        synapse: {
            generator: DN2A.Synapse
        },
        numbersPrecision: 32
    });

    // XOR training set
    var trainingPatterns = [
        {
            input: [0, 0],
            output: [0]
        },
        {
            input: [0, 1],
            output: [1]
        },
        {
            input: [1, 0],
            output: [1]
        },
        {
            input: [1, 1],
            output: [0]
        }
    ];

    // train the network, logging the elapsed epochs
    neuralNetwork.train(trainingPatterns, function(trainingStatus) {
        console.log("Epoch: " + trainingStatus.elapsedEpochCounter);
    });

    // query the trained network with each possible input pattern
    var inputPatterns = [
        [0, 0],
        [0, 1],
        [1, 0],
        [1, 1]
    ];
    neuralNetwork.query(inputPatterns, function(queryingStatus) {
        inputPatterns.forEach(function(inputPattern, inputPatternIndex) {
            console.log("[" + inputPattern.join(", ") + "] => [" + queryingStatus.outputPatterns[inputPatternIndex].join(", ") + "]");
        });
    });

Have these examples given you some ideas?
Then start coding and show people the incredible things that you can create with Neural Networks in JavaScript.

The world is full of opportunities and diversity!