---
layout: container
title: Loading data from different sources using Backends - Tutorial
recline-deps: true
root: ../
---
# Loading data from different sources using Backends
This set of examples shows you how to load data from different sources such as Google Docs or the DataHub using Recline.
Note: often you load data from a given source in order to put it into a Recline Dataset and display it in a View. However, you can also happily use a Backend on its own, without any other part of the Recline library, as all Backends are designed to have no dependencies on other parts of Recline.
## Overview
Backends connect Datasets and Documents to data from a specific 'Backend' data source. They provide methods for loading and saving Datasets and individual Documents, as well as for bulk loading via a query API and performing bulk transforms on the backend.
Backends come in two flavours:
- Loader backends - these only implement the fetch method. The data is then cached in a Memory.Store on the Dataset and interacted with there. This is best for sources that only allow you to load data, or where you want to load the data once and work with it locally.
- Store backends - these support fetch, query and, if write-enabled, save. They are suitable where the backend contains a lot of data (infeasible to load locally - a million rows, for example) or where the backend has capabilities you want to take advantage of. The sketch below contrasts the two flavours.
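As a rough illustration of the difference, here is a minimal sketch using the Dataset-level API introduced in the next section. The URLs and the 'london' query are placeholders, not real endpoints.

{% highlight javascript %}
// Loader backend (e.g. Google Docs): fetch() loads all the data once and
// caches it in a Memory.Store on the Dataset, so later query() calls run
// entirely in the browser.
var loaded = new recline.Model.Dataset({
  url: 'http://example.com/my-published-spreadsheet', // placeholder URL
  backend: 'gdocs'
});
loaded.fetch().done(function() {
  loaded.query({q: 'london'}); // filtering happens locally, in memory
});

// Store backend (e.g. ElasticSearch): fetch() only retrieves metadata such as
// the field list; each query() is sent to the backend itself, so the full data
// never has to be loaded into the browser.
var remote = new recline.Model.Dataset({
  url: 'http://example.com/my-es-index/my-type', // placeholder URL
  backend: 'elasticsearch'
});
remote.fetch().done(function() {
  remote.query({q: 'london', size: 10}); // query executed by ElasticSearch
});
{% endhighlight %}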
## Instantiation and Use
You can use a backend directly, e.g.:
{% highlight javascript %}
var backend = recline.Backend.ElasticSearch.fetch({url: ...});
{% endhighlight %}
More usually, though, the backend will be created or loaded for you by Recline and all you need to do is provide the identifier for that Backend, e.g.:
{% highlight javascript %}
var dataset = new recline.Model.Dataset({
  backend: 'backend-identifier'
});
{% endhighlight %}
### Backend identifiers

How do you know the backend identifier for a given Backend? It's just the name of the 'class' in the recline.Backend module (but case-insensitive). E.g. recline.Backend.ElasticSearch can be identified as 'ElasticSearch' or 'elasticsearch'.
### What Backends are available from Recline?

{% include backend-list.html %}
Is there a Backend you'd like to see that isn't available? It's easy to write your own – see the Backend reference docs for details of the required API.
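To give a feel for what is involved, here is a minimal sketch of a read-only (loader) backend. It is only a sketch: the module name MyBackend is hypothetical, and the exact shape of the object that fetch should resolve with (fields, records, useMemoryStore) is an assumption here – check it against the Backend reference docs.

{% highlight javascript %}
this.recline = this.recline || {};
this.recline.Backend = this.recline.Backend || {};
this.recline.Backend.MyBackend = {}; // hypothetical backend name

(function($, my) {
  // fetch: given a dataset object (here we assume it has a url attribute),
  // return a Deferred that resolves with the data for that dataset
  my.fetch = function(dataset) {
    var deferred = $.Deferred();
    $.getJSON(dataset.url)
      .done(function(rows) {
        deferred.resolve({
          // derive field descriptors from the keys of the first row
          fields: _.map(_.keys(rows[0] || {}), function(key) {
            return {id: key};
          }),
          records: rows,        // array of row objects
          useMemoryStore: true  // assumed flag: cache the data in a Memory.Store
        });
      })
      .fail(function(error) {
        deferred.reject(error);
      });
    return deferred.promise();
  };
}(jQuery, this.recline.Backend.MyBackend));
{% endhighlight %}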
## Preparing your app
This is as per the quickstart, but the set of files you need is much more limited if you are just using a Backend. Specifically:
{% highlight html %}
{% endhighlight %}
## Loading Data from Google Docs
We will be using the following Google Doc. For Recline to be able to access a Google Spreadsheet it must have been 'Published to the Web' (enabled via the File -> Publish to the Web menu).
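The full, working example is included below. Stripped down, the pattern looks roughly like this (the spreadsheet URL is a placeholder – use the published URL of your own spreadsheet):

{% highlight javascript %}
// a minimal sketch -- the URL is a placeholder for a published spreadsheet
var dataset = new recline.Model.Dataset({
  url: 'https://docs.google.com/spreadsheet/ccc?key=YOUR-KEY', // placeholder
  backend: 'gdocs'
});
dataset.fetch().done(function() {
  // the data is now cached locally on the dataset
  console.log(dataset.fields.toJSON());
  console.log(dataset.records.toJSON());
});
{% endhighlight %}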
{% highlight javascript %}
{% include example-backends-gdocs.js %}
{% endhighlight %}
### Result
## Loading Data from ElasticSearch and the DataHub
Recline supports ElasticSearch as a full read/write/query backend. This also means that Recline can load data from the DataHub's data API, as that API is ElasticSearch compatible. Below is an example using this dataset about Rendition flights on the DataHub.
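In outline the pattern is as follows; the DataHub resource URL and the free-text query here are placeholders (the full example follows):

{% highlight javascript %}
// a minimal sketch -- point the dataset at an ElasticSearch-compatible
// endpoint such as a DataHub data API URL (placeholder shown here)
var dataset = new recline.Model.Dataset({
  url: 'http://datahub.io/api/data/SOME-RESOURCE-ID', // placeholder
  backend: 'elasticsearch'
});
dataset.fetch().done(function() {
  // fetch retrieves the field information; the records stay on the server
  dataset.query({q: 'rendition', size: 10}).done(function() {
    // dataset.records now holds the matching records returned by the query
    console.log(dataset.records.toJSON());
  });
});
{% endhighlight %}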
{% highlight javascript %}
{% include example-backends-elasticsearch.js %}
{% endhighlight %}
### Result
## Loading data from CSV files
For loading data from CSV files there are three cases:

- The CSV file is online, on the same domain or on a server that supports CORS (S3 and Google Storage support CORS!) -- we can then load it using AJAX, as there is no problem with the same-origin policy
- The CSV file is on local disk -- if your browser supports the HTML5 File API we can load the CSV file off disk
- The CSV file is online but not on the same domain (and CORS is not supported) -- use the DataProxy (see below)
### Local online CSV file
Let's start with the first case: loading a "local" online CSV file. We'll be using this example file.
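The included example below shows the details. One simple way to do it (a sketch, with a placeholder URL) is to fetch the CSV text yourself with AJAX and hand it to the CSV backend as a string:

{% highlight javascript %}
// a minimal sketch -- the URL is a placeholder for a same-domain
// (or CORS-enabled) CSV file
var url = 'http://example.com/data/sample.csv';
$.get(url).done(function(csvText) {
  var dataset = new recline.Model.Dataset({
    data: csvText,   // the raw CSV as a string
    backend: 'csv'
  });
  dataset.fetch().done(function() {
    console.log(dataset.records.toJSON());
  });
});
{% endhighlight %}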
{% highlight javascript %}
{% include example-backends-online-csv.js %}
{% endhighlight %}
### Result
### CSV file on disk
This requires your browser to support the HTML5 File API. Suppose we have a file input element on the page.
Then we can load the file the user selects into a Recline Dataset as follows:
{% highlight javascript %}
{% include example-backends-csv-disk.js %}
{% endhighlight %}
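In outline, the code does something like the following. The input element id and wiring are hypothetical (use whatever your page has); the key part is passing the HTML5 File object to the dataset via the file attribute:

{% highlight javascript %}
// a minimal sketch -- '#my-file-input' is a hypothetical id for the file input
$('#my-file-input').change(function(e) {
  var file = e.target.files[0]; // the HTML5 File object the user selected
  var dataset = new recline.Model.Dataset({
    file: file,
    backend: 'csv'
  });
  dataset.fetch().done(function() {
    console.log(dataset.records.toJSON());
  });
});
{% endhighlight %}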
### Try it out!
Try it out by clicking on the file input above, selecting a CSV file and seeing what happens.
## Loading data from CSV and Excel files online using DataProxy
The DataProxy is a web service run by the Open Knowledge Foundation that converts CSV and Excel files to JSON. It has a convenient JSONP-able API, which means we can use it to load data from online CSV and Excel files into Recline Datasets.
Recline ships with a simple DataProxy "backend" that takes care of fetching data via the DataProxy.
The main limitations of the DataProxy are that it can only handle Excel files up to a certain size (5 MB) and that, as we must use JSONP to access it, error information can be limited.
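In outline the pattern is roughly as follows; the URL is a placeholder and the format attribute (csv or xls) is an assumption here – check the included example below and the backend docs for the exact attributes:

{% highlight javascript %}
// a minimal sketch -- the URL is a placeholder for a CSV/Excel file on another domain
var dataset = new recline.Model.Dataset({
  url: 'http://example.com/data/remote-data.csv', // placeholder
  format: 'csv',      // assumed attribute telling the DataProxy the file type
  backend: 'dataproxy'
});
dataset.fetch()
  .done(function() {
    console.log(dataset.records.toJSON());
  })
  .fail(function(error) {
    // with JSONP this is most likely the timeout described below
    console.log(error);
  });
{% endhighlight %}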
{% highlight javascript %}
{% include example-backends-dataproxy.js %}
{% endhighlight %}
### Result
### Customizing the timeout
As we must use JSONP in this backend, we have the problem that if the DataProxy errors (e.g. returns a 500) this won't be picked up. To deal with this, and to prevent the case where the request never finishes, there is a timeout on the request, after which the Backend returns an error stating that the request timed out.
You can customize the length of this timeout by setting the following constant:
{% highlight javascript %}
// Customize the timeout (in milliseconds) - default is 5000
recline.Backend.DataProxy.timeout = 10000;
{% endhighlight %}