How to run winkNLP in the browser
WinkNLP is designed to work both on Node.js and in web browsers. Apart from building a server-side solution using Node.js, you can build a pure browser-side NLP app with equal ease. To do this, we need to use the web version of the English lite model — wink-eng-lite-web-model. We'll also need a tool that can bundle all the required modules, such as Webpack or Browserify. For the purpose of this tutorial we'll use Browserify. First, let's install the required packages:
npm install wink-nlp --save
npm install wink-eng-lite-web-model --save
npm install -g browserify
Next, create a file named token-counter.js and require winkNLP, some helpers, and the web model:
const winkNLP = require( 'wink-nlp' );
const model = require( 'wink-eng-lite-web-model' );
const nlp = winkNLP( model );
// Acquire "its" and "as" helpers from nlp.
const its = nlp.its;
const as = nlp.as;

const text = `Its quarterly profits jumped 76% to $1.13 billion for the three months to December, from $639million of previous year.`;
const doc = nlp.readDoc( text );
// Mark up every detected entity, then write the result into the page.
doc.entities().each( ( e ) => e.markup() );
document.getElementById( 'result' ).innerHTML = doc.out( its.markedUpText );
Now, we'll use Browserify to bundle all the required modules into a single file:
browserify token-counter.js -o bundle.js
This will create a new file called bundle.js, which you can include in your HTML as you would any other script:
<div id="result"></div>
<script src="bundle.js" charset="utf-8"></script>
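For reference, a minimal page tying these pieces together might look like the following sketch (the file name index.html and the page title are illustrative; only the result div and the bundle.js script tag are required by the code above):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>winkNLP in the browser</title>
  </head>
  <body>
    <!-- token-counter.js writes the marked-up text into this element. -->
    <div id="result"></div>
    <script src="bundle.js" charset="utf-8"></script>
  </body>
</html>
```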
Opening the page will render the following text, with the detected entities highlighted:
Its quarterly profits jumped 76% to $1.13 billion for the three months to December, from $639million of previous year.
It is important to note that this is a fully-featured English language model. Make sure to use gzip compression when you serve it on the web; this reduces its size to under 1MB (from the uncompressed 3.5MB). Also, setting an appropriate cache header will ensure that the client doesn't have to download it multiple times.