Compare commits

..

95 Commits

Author SHA1 Message Date
47b4ffece8 Update packages/remark-wiki-link/src/lib/remarkWikiLink.ts
2025-04-08 18:12:43 +00:00
Rufus Pollock
816db6858c [site/content/docs/dms][s]: remove this directory as duplicate of content on tech.datopian.com/ especially tech.datopian.com/dms. 2025-02-09 22:59:31 +00:00
github-actions[bot]
0381f2fccf Version Packages 2025-01-22 16:37:23 +01:00
Ola Rubaj
62dbc35d3b fix(LineChart): skip lines at invalid/missing data points (don't force connect) 2025-01-22 16:23:17 +01:00
Lucas Morais Bispo
12f0d0d732
Merge pull request #1347 from datopian/feature/update-links
[md][datahub] updated links from portaljs.org to portaljs.com
2025-01-13 08:52:50 -03:00
muhammad-hassan11
d80d1f5012 removed logs 2025-01-13 16:22:01 +05:00
Anuar Ustayev (aka Anu)
af5b6b7a29
Rename/rebrand from datahub to portaljs.
DataHub.io is becoming something different, e.g., hub for data OR data market[place] while PortalJS.com is a cloud platform for creating managed data portals.
2024-12-24 10:43:26 +05:00
muhammad-hassan11
8487175f01 [md][datahub] updated links from portaljs.org to portaljs.com 2024-12-23 21:57:09 +05:00
Anuar Ustayev (aka Anu)
6551576700
Change back to PortalJS name for data portals. 2024-12-23 11:10:39 +05:00
Lucas Morais Bispo
4fccb2945f
Merge pull request #1346 from datopian/fix/dotorgmerging
[site][WIP] Seo - update title, canonical
2024-12-05 19:35:07 -03:00
lucasmbispo
a9025e5cbe [site]:seo - update title, canonical 2024-12-05 08:14:18 -03:00
github-actions[bot]
ad5a176e85 Version Packages 2024-11-11 15:52:06 +01:00
Ola Rubaj
eeb480e8cf [fix][xs]: allow yearmonth TimeUnit in LineChart 2024-11-11 15:40:07 +01:00
github-actions[bot]
30fcb256b2 Version Packages 2024-10-24 08:53:23 +02:00
Ola Rubaj
a4f8c0ed76 [chore][xs]: update package-lock 2024-10-24 08:46:51 +02:00
Ola Rubaj
829f3b1f13 [chore][xs]: fix formatting 2024-10-24 08:46:27 +02:00
Ola Rubaj
836b143a31 [fix][xs]: make tileLayerName in Map optional 2024-10-24 08:45:51 +02:00
github-actions[bot]
be38086794 Version Packages 2024-10-23 18:08:18 +02:00
Ola Rubaj
63d9e3b754
[feat,LineChart][s]: support for multiple series 2024-10-23 18:03:07 +02:00
Anuar Ustayev (aka Anu)
f86f0541eb
Merge pull request #1332 from datopian/site/fix-showcases
[portaljs site][showcases][s] Merge examples into Showcases tab
2024-10-11 09:36:16 +05:00
Lucas Morais Bispo
64bc212384
Update README.md 2024-10-09 11:46:02 -03:00
Lucas Morais Bispo
1e7daf353d
Add files via upload 2024-10-09 11:28:42 -03:00
lucasmbispo
cc69dabf80 [site][showcases] update examples 2024-10-03 21:04:06 -03:00
lucasmbispo
a5d87712e0 [site][showcases][s] Merge examples into Showcases tab 2024-10-01 11:07:33 -03:00
Rufus Pollock
86834fd1a6
Merge pull request #1317 from loleg/patch-1
Fix link to Next.js in README.md
2024-09-20 13:30:02 +02:00
Oleg Lavrovsky
8a661b1617
Fix link to Next.js in README.md 2024-09-20 11:23:06 +02:00
Rufus Pollock
1baebc3f3c
Merge pull request #1200 from rzmk/patch-1
[#1181, examples/ckan-ssg][xs]: update example generation command
2024-07-05 19:13:43 +02:00
João Demenech
bbac4954f5
Merge pull request #1202 from datopian/changeset-release/main
Version Packages
2024-06-24 17:58:02 -03:00
github-actions[bot]
be6b184884 Version Packages 2024-06-24 20:47:23 +00:00
João Demenech
64103d6488
Merge pull request #1122 from datopian/feature/custom-tile-layer
Custom Tile Layer for Map Component
2024-06-24 17:44:19 -03:00
Demenech
8e3496782c version: add changeset 2024-06-24 17:42:49 -03:00
Mueez Khan
e034503399
[examples/ckan-ssg][xs]: update command to create project 2024-06-22 00:17:49 -04:00
William Lima
93ae498ec2 Code cleanup 2024-06-19 10:10:56 -01:00
William Lima
97e43fdcba add mapbox as default basemap 2024-06-18 22:37:20 -01:00
William Lima
32f29024f8 attr replace fix 2024-06-18 22:05:41 -01:00
William Lima
134f72948c Add TileLayer Presets configuration 2024-06-18 22:01:59 -01:00
Rufus Pollock
c1f2c526a8 [#1181,site][xs]: change portaljs to datahub in github repo references. 2024-06-10 19:31:43 +02:00
João Demenech
8feb87739d
Merge pull request #1173 from datopian/changeset-release/main
Version Packages
2024-06-09 08:06:43 -03:00
github-actions[bot]
3a07267e44 Version Packages 2024-06-09 09:25:23 +00:00
Rufus Pollock
3f19ca16ed
[#1118,docs/portaljs][s 2024-06-09 11:22:25 +02:00
João Demenech
5deabac5fe
Merge pull request #1170 from datopian/fix/iframe-height
[components][iFrame] Change default height
2024-06-04 14:57:24 -03:00
lucasmbispo
96901150c6 [changesets] change major to patch 2024-06-04 09:38:47 -03:00
lucasmbispo
9ff25ed7c4 [components][iFrame] Change iFrame height 2024-06-04 09:38:12 -03:00
lucasmbispo
8f884fceab [components][iFrame] Change default height 2024-06-04 09:26:30 -03:00
Anuar Ustayev (aka Anu)
7094eded50
Merge pull request #1167 from datopian/fix/map-geojson
Fix: autoZoomConfiguration not working properly when the geojson parameter is passed
2024-06-04 14:06:45 +05:00
Rufus Pollock
30e7c6379f
Merge pull request #1069 from marcchehab/patch-2 - Add SiteToc to MobileNav.
This PR adds the SiteToc to the MobileNav. It also fixes double type declarations in MobileNav by importing the interfaces from Nav. Adding SiteToc was then just a matter of uncommenting code that was there already.
2024-05-31 17:16:42 +02:00
Ronaldo Campos
feada58932 Fix: autoZoomConfiguration not working properly when the geojson parameter is passed 2024-05-31 11:37:01 -03:00
William Lima
31406d48e3 Update Map.tsx 2024-05-31 10:29:15 -01:00
Daniellappv
d6bf344ca3
Update CONTRIBUTING.md 2024-05-31 10:55:58 +03:00
William Lima
d1a5138c6e include configs on .env vars or pass through props 2024-05-22 11:48:20 -01:00
William Lima
a6047a9341 Implements Custom Tile Layer
#1121 adds default tile layer and allows user to pass a tile object to map
2024-05-13 12:51:28 -01:00
Ola Rubaj
a4e60540ae
Merge pull request #1119 from datopian/remark-wiki-link-cleanup
## Changes

- remove unneeded tests
- do not remove "index" from the end of file path in `getPermalinks` function
2024-05-09 02:20:45 +02:00
Ola Rubaj
e4c456c237 rm changeset file 2024-05-09 02:19:54 +02:00
Ola Rubaj
ce9ebbf41e add changeset file 2024-05-09 02:16:05 +02:00
Ola Rubaj
a8fb176bcc rm test for custom permalink converter (irrelevant) 2024-05-09 02:12:44 +02:00
Ola Rubaj
2ac82367c5 do not remove "index" from the end of file
- should be treated as a regular file name
- it's up to the app how to interpret those paths/files later
2024-05-09 02:12:38 +02:00
Ola Rubaj
85de6f7878 replace inex.md with README.md in test fixtures 2024-05-09 02:09:52 +02:00
Ola Rubaj
539fffeb55
Merge pull request #1113 from datopian/changeset-release/main
Version Packages
2024-04-18 15:43:31 +02:00
github-actions[bot]
0d276535bd Version Packages 2024-04-18 13:42:23 +00:00
Ola Rubaj
38dd7103a3
Merge pull request #1103 from datopian/feat/portaljs-components-improvements
Components API and docs improvements
Related to: #1089
2024-04-17 16:16:27 +02:00
Ola Rubaj
48cd812a48 add changeset file 2024-04-17 16:14:00 +02:00
Ola Rubaj
7bba10714d refresh package-lock file 2024-04-17 16:13:47 +02:00
github-actions[bot]
de2c1e5b48
Version Packages (#1109)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-04-17 15:54:34 +02:00
mohamed yahia
57952e0817
[remark-wiki-link][m]: Add image size adjustment in remark-wiki-link (#1084)
* [remark-wiki-link][m]: Add image size adjustment in `remark-wiki-link`

* [remark-wiki-links][m]: Add image size feature to images
2024-04-15 18:39:27 +02:00
Demenech
df9664624f fix(LineChart): remove unused fillWidth prop 2024-04-09 17:45:12 -03:00
Demenech
2ea185b710 feat: Catalog component API and docs improvements 2024-04-09 17:41:01 -03:00
Demenech
b859d48f17 feat: Map component API and docs improvements 2024-04-09 17:30:45 -03:00
Demenech
3d73ac422e feat: Vega and Vega Lite components API and docs improvements 2024-04-09 17:13:05 -03:00
Demenech
059ffe4e34 feat: PlotlyLineChart component API and docs improvements 2024-04-09 17:08:50 -03:00
Demenech
0aed7dce77 feat: Plotly component docs improvements 2024-04-09 16:57:23 -03:00
Demenech
c202d6cfc4 feat: LineChart component API and docs improvements 2024-04-09 16:50:49 -03:00
Demenech
d9c20528c5 feat: PdfViewer component API and docs improvements 2024-04-09 16:20:01 -03:00
Demenech
b7ee5a1869 feat: Iframe component API and docs improvements 2024-04-09 16:07:12 -03:00
Demenech
4b5d549190 feat: comment out the Table component for now 2024-04-09 15:58:33 -03:00
Demenech
e6f0ab4ec8 feat: FlatUiTable component API and docs improvements 2024-04-09 15:54:03 -03:00
Demenech
22038fbd4f feat: Excel component API and docs improvements 2024-04-09 15:44:37 -03:00
Demenech
8b292a9bf2 feat: group stories in different categories 2024-04-09 15:36:48 -03:00
Demenech
cda3d335f1 feat: rename Plotly components stories so that they show up together on the storybook sidebar 2024-04-09 15:25:14 -03:00
Demenech
fe97cc87f4 fix: OpenLayers and BucketViewer were still showing up 2024-04-09 15:22:55 -03:00
Demenech
88f6199d18 feat: implement new Data interface + review PlotlyBarChart API and docs + hide BucketViewer and OpenLayers 2024-04-09 15:21:08 -03:00
Rufus Pollock
852cf60abc
[README][s]: further refinements of info re DataHub. 2024-04-05 11:50:24 +02:00
Rufus Pollock
704be0d5a7
[README][s]: update portal.js to datahub nomenclature. 2024-04-05 11:28:37 +02:00
Rufus Pollock
fb3598fa49
Delete .vscode/extensions.json
No need for vscode stuff in repo.
2024-04-01 18:18:06 +02:00
Rufus Pollock
d898b5a833
Merge pull request #1065 from marcchehab/patch-1
Fix React warning about unique "key" prop
2024-03-29 14:56:38 +01:00
github-actions[bot]
3aac4dabf9
Version Packages (#1087)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-03-22 10:42:53 -03:00
Luccas Mateus
a044f56e3c Changeset 2024-03-22 10:32:11 -03:00
Luccas Mateus
1b58c311eb Plotly components 2024-03-22 10:31:30 -03:00
Rufus Pollock
ed9ac2c263
Delete tools/tsconfig.tools.json
Remove as tools is empty folder.
2024-02-28 16:49:24 +01:00
Rufus Pollock
42c72e5afd
Delete tools/generators/.gitkeep
empty directory
2024-02-28 16:48:34 +01:00
Rufus Pollock
9e1a324fa1
[examples/fivethirtyeight][xs]: link to blog post we wrote about it. 2024-02-13 13:31:39 +01:00
Rufus Pollock
90178af8f2
[examples/fivethirtyeight][s]: add link to demo site to README. 2024-02-13 13:30:57 +01:00
Rufus Pollock
00e61e104c
[site/config][xs]: change discord server to datahub. 2024-02-09 10:27:06 +01:00
luzmediach
1a8e7ac06e NavMobile to use Nav interfaces and add SiteToc to sidebar 2024-01-21 12:48:10 +01:00
marcchehab
4355efe0c4
Update Nav.tsx 2024-01-21 12:36:46 +01:00
marcchehab
9e73410b17
Fix React warning about unique "key" prop
I always get a react warning: Warning: Each child in a list should have a unique "key" prop.

This fixed it and makes for warning-free development 😊
2024-01-04 14:14:49 +01:00
156 changed files with 6246 additions and 13063 deletions

View File

@ -1,8 +0,0 @@
{
"recommendations": [
"nrwl.angular-console",
"esbenp.prettier-vscode",
"firsttris.vscode-jest-runner",
"dbaeumer.vscode-eslint"
]
}

View File

@ -4,7 +4,7 @@ title: Developer docs for contributors
## Our repository
https://github.com/datopian/portaljs
https://github.com/datopian/datahub
Structure:
@ -17,7 +17,7 @@ Structure:
## How to contribute
You can start by checking our [issues board](https://github.com/datopian/portaljs/issues).
You can start by checking our [issues board](https://github.com/datopian/datahub/issues).
If you'd like to work on one of the issues you can:
@ -35,7 +35,7 @@ If you'd like to work on one of the issues you can:
If you have an idea for improvement, and it doesn't have a corresponding issue yet, simply submit a new one.
> [!note]
> Join our [Discord channel](https://discord.gg/rTxfCutu) do discuss existing issues and to ask for help.
> Join our [Discord channel](https://discord.gg/KZSf3FG4EZ) do discuss existing issues and to ask for help.
## Nx

View File

@ -1,31 +1,51 @@
<h1 align="center">
🌀 Portal.JS
<br />
Rapidly build rich data portals using a modern frontend framework
</h1>
<p align="center">
Bugs, issues and suggestions re PortalJS framework
<br />
<br /><a href="https://discord.gg/xfFDMPU9dC"><img src="https://dcbadge.vercel.app/api/server/xfFDMPU9dC" /></a>
</p>
* [What is Portal.JS ?](#What-is-Portal.JS)
* [Features](#Features)
* [For developers](#For-developers)
* [Docs](#Docs)
* [Community](#Community)
* [Appendix](#Appendix)
* [What happened to Recline?](#What-happened-to-Recline?)
## PortalJS framework
# What is Portal.JS
This repo and issue tracker are for
🌀 Portal.JS is a framework for rapidly building rich data portal frontends using a modern frontend approach. Portal.JS can be used to present a single dataset or build a full-scale data catalog/portal.
- PortalJS 🌀 - https://www.portaljs.com/
- DataHub Cloud ☁️ - https://datahub.io/
Built in JavaScript and React on top of the popular [Next.js](https://nextjs.com/) framework. Portal.JS assumes a "decoupled" approach where the frontend is a separate service from the backend and interacts with backend(s) via an API. It can be used with any backend and has out of the box support for [CKAN](https://ckan.org/).
### Issues
## Features
Found a bug: 👉 https://github.com/datopian/portaljs/issues/new
### Discussions
Got a suggestion, a question, want some support or just want to shoot the breeze 🙂
Head to the discussion forum: 👉 https://github.com/datopian/portaljs/discussions
### Chat on Discord
If you would prefer to get help via live chat check out our discord 👉
[Discord](https://discord.gg/xfFDMPU9dC)
### Docs
- For PortalJS go to https://www.portaljs.com/opensource
- For DataHub Cloud https://datahub.io/docs
## PortalJS Cloud 🌀
PortalJS Cloud 🌀 is a platform for rapidly creating rich data portal and publishing systems using a modern frontend approach. PortalJS Cloud can be used to publish a single dataset or build a full-scale data catalog/portal.
PortalJS Cloud is built in JavaScript and React on top of the popular [Next.js](https://nextjs.org) framework. PortalJS Cloud assumes a "decoupled" approach where the frontend is a separate service from the backend and interacts with backend(s) via an API. It can be used with any backend and has out of the box support for [CKAN](https://ckan.org/), GitHub, Frictionless Data Packages and more.
### Features
- 🗺️ Unified sites: present data and content in one seamless site, pulling datasets from a DMS (e.g. CKAN) and content from a CMS (e.g. Wordpress) with a common internal API.
- 👩‍💻 Developer friendly: built with familiar frontend tech (JavaScript, React, Next.js).
- 🔋 Batteries included: full set of portal components out of the box e.g. catalog search, dataset showcase, blog, etc.
- 🎨 Easy to theme and customize: installable themes, use standard CSS and React+CSS tooling. Add new routes quickly.
- 🧱 Extensible: quickly extend and develop/import your own React components
- 📝 Well documented: full set of documentation plus the documentation of Next.js and Apollo.
- 📝 Well documented: full set of documentation plus the documentation of Next.js.
### For developers
@ -33,25 +53,3 @@ Built in JavaScript and React on top of the popular [Next.js](https://nextjs.com
- 🚀 Next.js framework: so everything in Next.js for free: Server Side Rendering, Static Site Generation, huge number of examples and integrations, etc.
- Server Side Rendering (SSR) => Unlimited number of pages, SEO and more whilst still using React.
- Static Site Generation (SSG) => Ultra-simple deployment, great performance, great lighthouse scores and more (good for small sites)
#### **Check out the [Portal.JS website](https://portaljs.org/) for a gallery of live portals**
___
# Docs
Access the Portal.JS documentation at:
https://portaljs.org/docs
- [Examples](https://portaljs.org/docs#examples)
# Community
If you have questions about anything related to Portal.JS, you're always welcome to ask our community on [GitHub Discussions](https://github.com/datopian/portal.js/discussions) or on our [Discord server](https://discord.gg/EeyfGrGu4U).
# Appendix
## What happened to Recline?
Portal.JS used to be Recline(JS). If you are looking for the old Recline codebase it still exists: see the [`recline` branch](https://github.com/datopian/portal.js/tree/recline). If you want context for the rename see [this issue](https://github.com/datopian/portal.js/issues/520).

View File

@ -2,7 +2,7 @@
**🚩 UPDATE April 2023: This example is now deprecated - though still works!. Please use the [new CKAN examples](https://github.com/datopian/portaljs/tree/main/examples)**
This example shows how you can build a full data portal using a CKAN Backend with a Next.JS Frontend powered by Apollo, a full fledged guide is available as a [blog post](https://portaljs.org/blog/example-ckan-2021)
This example shows how you can build a full data portal using a CKAN Backend with a Next.JS Frontend powered by Apollo, a full fledged guide is available as a [blog post](https://portaljs.com/blog/example-ckan-2021)
## Developers

View File

@ -1,7 +1,7 @@
This is a repo intended to serve as an example of a data catalog that get its data from a CKAN Instance.
```
npx create-next-app <app-name> --example https://github.com/datopian/portaljs/tree/main/examples/ckan-example
npx create-next-app <app-name> --example https://github.com/datopian/datahub/tree/main/examples/ckan-ssg
cd <app-name>
```
@ -19,7 +19,7 @@ npm run dev
Congratulations, you now have something similar to this running on `http://localhost:4200`
![](https://media.discordapp.net/attachments/1069718983604977754/1098252297726865408/image.png?width=853&height=461)
If yo go to any one of those pages by clicking on `More info` you will see something similar to this
If you go to any one of those pages by clicking on `More info` you will see something similar to this
![](https://media.discordapp.net/attachments/1069718983604977754/1098252298074988595/image.png?width=853&height=461)
## Deployment

View File

@ -1,6 +1,6 @@
This example creates a portal/showcase for a single dataset. The dataset should be a [Frictionless dataset (data package)][fd] i.e. there should be a `datapackage.json`.
[fd]: https://frictionlessdata.io/data-packages/
[fd]: https://specs.frictionlessdata.io/data-package/
## How to use

View File

@ -1,3 +1,9 @@
# PortalJS Demo replicating the FiveThirtyEight data portal
## 👉 https://fivethirtyeight.portaljs.org 👈
Here's a blog post we wrote about it: https://www.datopian.com/blog/fivethirtyeight-replica
This is a replica of the awesome data.fivethirtyeight.com using PortalJS.
You might be asking why we did that, there are three main reasons:

View File

@ -59,7 +59,7 @@ export default function Layout({ children }: { children: React.ReactNode }) {
<div className="md:flex items-center gap-x-3 text-[#3c3c3c] -mb-1 hidden">
<a
className="hover:opacity-75 transition"
href="https://portaljs.org"
href="https://portaljs.com"
>
Built with 🌀PortalJS
</a>
@ -77,7 +77,7 @@ export default function Layout({ children }: { children: React.ReactNode }) {
<li>
<a
className="hover:opacity-75 transition"
href="https://portaljs.org"
href="https://portaljs.com"
>
PortalJS
</a>

View File

@ -6,7 +6,7 @@ A `datasets.json` file is used to specify which datasets are going to be part of
The application contains an index page, which lists all the datasets specified in the `datasets.json` file, and users can see more information about each dataset, such as the list of data files in it and the README, by clicking the "info" button on the list.
You can read more about it on the [Data catalog with data on GitHub](https://portaljs.org/docs/examples/github-backed-catalog) blog post.
You can read more about it on the [Data catalog with data on GitHub](https://portaljs.com/docs/examples/github-backed-catalog) blog post.
## Demo

View File

@ -40,7 +40,7 @@ export function Datasets({ projects }) {
<Link
target="_blank"
className="underline"
href="https://portaljs.org/"
href="https://portaljs.com/"
>
🌀 PortalJS
</Link>

View File

@ -1 +1 @@
PortalJS Learn Example - https://portaljs.org/docs
PortalJS Learn Example - https://portaljs.com/docs

View File

@ -6,7 +6,7 @@ A `datasets.json` file is used to specify which datasets are going to be part of
The application contains an index page, which lists all the datasets specified in the `datasets.json` file, and users can see more information about each dataset, such as the list of data files in it and the README, by clicking the "info" button on the list.
You can read more about it on the [Data catalog with data on GitHub](https://portaljs.org/docs/examples/github-backed-catalog) blog post.
You can read more about it on the [Data catalog with data on GitHub](https://portaljs.com/docs/examples/github-backed-catalog) blog post.
## Demo

View File

@ -17,7 +17,7 @@ export default function Footer() {
</a>
</div>
<div className="flex gap-x-2 items-center mx-auto h-20">
<p className="mt-8 text-base text-slate-500 md:mt-0">Built with <a href="https://portaljs.org" target="_blank" className='text-xl font-medium'>🌀 PortalJS</a></p>
<p className="mt-8 text-base text-slate-500 md:mt-0">Built with <a href="https://portaljs.com" target="_blank" className='text-xl font-medium'>🌀 PortalJS</a></p>
</div>
</div>
</footer>

View File

@ -127,4 +127,4 @@ Based on the bar chart above we can conclude that the following 3 countries have
2. Poland - EUR ~68b.
3. Italy - EUR ~35b.
_This data story was created by using Datopian's PortalJS framework. You can learn more about the framework by visiting https://portaljs.org/_
_This data story was created by using Datopian's PortalJS framework. You can learn more about the framework by visiting https://portaljs.com/_

View File

@ -1,6 +1,6 @@
This demo data portal is designed for https://hatespeechdata.com. It catalogs datasets annotated for hate speech, online abuse, and offensive language which are useful for training a natural language processing system to detect this online abuse.
The site is built on top of [PortalJS](https://portaljs.org/). It catalogs datasets and lists of offensive keywords. It also includes static pages. All of these are stored as markdown files inside the `content` folder.
The site is built on top of [PortalJS](https://portaljs.com/). It catalogs datasets and lists of offensive keywords. It also includes static pages. All of these are stored as markdown files inside the `content` folder.
- .md files inside `content/datasets/` will appear on the dataset list section of the homepage and be searchable as well as having a individual page in `datasets/<file name>`
- .md files inside `content/keywords/` will appear on the list of offensive keywords section of the homepage as well as having a individual page in `keywords/<file name>`

View File

@ -21,7 +21,7 @@ export function Footer() {
<Container.Inner>
<div className="flex flex-col items-center justify-between gap-6 sm:flex-row">
<p className="text-sm font-medium text-zinc-800 dark:text-zinc-200">
Built with <a href='https://portaljs.org'>PortalJS 🌀</a>
Built with <a href='https://portaljs.com'>PortalJS 🌀</a>
</p>
<p className="text-sm text-zinc-400 dark:text-zinc-500">
&copy; {new Date().getFullYear()} Leon Derczynski. All rights

package-lock.json (generated): 3520 changed lines

File diff suppressed because it is too large.

View File

@ -2,7 +2,7 @@
"name": "@portaljs/ckan",
"version": "0.1.0",
"type": "module",
"description": "https://portaljs.org",
"description": "https://portaljs.com",
"keywords": [
"data portal",
"data catalog",

View File

@ -1,9 +1,16 @@
import 'tailwindcss/tailwind.css'
import '../src/index.css'
import type { Preview } from '@storybook/react';
window.process = {
...window.process,
env:{
...window.process?.env,
}
};
const preview: Preview = {
parameters: {
actions: { argTypesRegex: '^on[A-Z].*' },

View File

@ -1,5 +1,53 @@
# @portaljs/components
## 1.2.3
### Patch Changes
- [`62dbc35d`](https://github.com/datopian/portaljs/commit/62dbc35d3b39ea7409949340214ca83a448ee999) Thanks [@olayway](https://github.com/olayway)! - LineChart: break lines at invalid / missing values (don't connect if there are gaps in values).
## 1.2.2
### Patch Changes
- [`eeb480e8`](https://github.com/datopian/datahub/commit/eeb480e8cff2d11072ace55ad683a65f54f5d07a) Thanks [@olayway](https://github.com/olayway)! - Adjust `xAxisTimeUnit` property in LineChart to allow for passing `yearmonth`.
## 1.2.1
### Patch Changes
- [`836b143a`](https://github.com/datopian/datahub/commit/836b143a3178b893b1aae3fb511d795dd3a63545) Thanks [@olayway](https://github.com/olayway)! - Fix: make tileLayerName in Map optional.
## 1.2.0
### Minor Changes
- [#1338](https://github.com/datopian/datahub/pull/1338) [`63d9e3b7`](https://github.com/datopian/datahub/commit/63d9e3b7543c38154e6989ef1cc1d694ae9fc4f8) Thanks [@olayway](https://github.com/olayway)! - Support for plotting multiple series in LineChart component.
## 1.1.0
### Minor Changes
- [#1122](https://github.com/datopian/datahub/pull/1122) [`8e349678`](https://github.com/datopian/datahub/commit/8e3496782c022b0653e07f217c6b315ba84e0e61) Thanks [@willy1989cv](https://github.com/willy1989cv)! - Map: allow users to choose a base layer setting
## 1.0.1
### Patch Changes
- [#1170](https://github.com/datopian/datahub/pull/1170) [`9ff25ed7`](https://github.com/datopian/datahub/commit/9ff25ed7c47c8c02cc078c64f76ae35d6754c508) Thanks [@lucasmbispo](https://github.com/lucasmbispo)! - iFrame component: change height
## 1.0.0
### Major Changes
- [#1103](https://github.com/datopian/datahub/pull/1103) [`48cd812a`](https://github.com/datopian/datahub/commit/48cd812a488a069a419d8ecc67f24f94d4d1d1d6) Thanks [@demenech](https://github.com/demenech)! - Components API tidying up and storybook docs improvements.
## 0.6.0
### Minor Changes
- [`a044f56e`](https://github.com/datopian/portaljs/commit/a044f56e3cbe0519ddf9d24d78b0bb7eac917e1c) Thanks [@luccasmmg](https://github.com/luccasmmg)! - Added plotly components
## 0.5.10
### Patch Changes

View File

@ -1,7 +1,7 @@
# PortalJS React Components
**Storybook:** https://storybook.portaljs.org
**Docs**: https://portaljs.org/docs
**Docs**: https://portaljs.com/opensource
## Usage

View File

@ -1,8 +1,8 @@
{
"name": "@portaljs/components",
"version": "0.5.10",
"version": "1.2.3",
"type": "module",
"description": "https://portaljs.org",
"description": "https://portaljs.com",
"keywords": [
"data portal",
"data catalog",
@ -40,11 +40,13 @@
"ol": "^7.4.0",
"papaparse": "^5.4.1",
"pdfjs-dist": "2.15.349",
"plotly.js": "^2.30.1",
"postcss-url": "^10.1.3",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-hook-form": "^7.43.9",
"react-leaflet": "^4.2.1",
"react-plotly.js": "^2.6.0",
"react-query": "^3.39.3",
"react-vega": "^7.6.0",
"vega": "5.25.0",

View File

@ -7,7 +7,12 @@ export function Catalog({
datasets,
facets,
}: {
datasets: any[];
datasets: {
_id: string | number;
metadata: { title: string; [k: string]: string | number };
url_path: string;
[k: string]: any;
}[];
facets: string[];
}) {
const [indexFilter, setIndexFilter] = useState('');
@ -56,7 +61,7 @@ export function Catalog({
//Then check if the selectedValue for the given facet is included in the dataset metadata
.filter((dataset) => {
//Avoids a server rendering breakage
if (!watch() || Object.keys(watch()).length === 0) return true
if (!watch() || Object.keys(watch()).length === 0) return true;
//This will filter only the key pairs of the metadata values that were selected as facets
const datasetFacets = Object.entries(dataset.metadata).filter((entry) =>
facets.includes(entry[0])
@ -86,9 +91,7 @@ export function Catalog({
className="p-2 ml-1 text-sm shadow border border-block"
{...register(elem[0] + '.selectedValue')}
>
<option value="">
Filter by {elem[0]}
</option>
<option value="">Filter by {elem[0]}</option>
{(elem[1] as { possibleValues: string[] }).possibleValues.map(
(val) => (
<option
@ -102,10 +105,10 @@ export function Catalog({
)}
</select>
))}
<ul className='mb-5 pl-6 mt-5 list-disc'>
<ul className="mb-5 pl-6 mt-5 list-disc">
{filteredDatasets.map((dataset) => (
<li className='py-2' key={dataset._id}>
<a className='font-medium underline' href={dataset.url_path}>
<li className="py-2" key={dataset._id}>
<a className="font-medium underline" href={dataset.url_path}>
{dataset.metadata.title
? dataset.metadata.title
: dataset.url_path}
@ -116,4 +119,3 @@ export function Catalog({
</>
);
}
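
Based on the typed `datasets` prop above, a minimal usage sketch follows; the import path comes from the package index, while the IDs, paths and metadata values are illustrative (the title is borrowed from the Storybook fixtures further down):

```tsx
import { Catalog } from '@portaljs/components';

// Renders a searchable, facetable list of datasets.
// `facets` must name keys that exist in each dataset's `metadata`.
export function DatasetCatalog() {
  return (
    <Catalog
      facets={['language', 'platform']}
      datasets={[
        {
          _id: 'dataset-1',
          url_path: '/datasets/dataset-1',
          metadata: {
            title: 'Detecting Abusive Albanian',
            language: 'Albanian',
            platform: 'Instagram',
          },
        },
      ]}
    />
  );
}
```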

View File

@ -4,12 +4,14 @@ import { read, utils } from 'xlsx';
import { AgGridReact } from 'ag-grid-react';
import 'ag-grid-community/styles/ag-grid.css';
import 'ag-grid-community/styles/ag-theme-alpine.css';
import { Data } from '../types/properties';
export type ExcelProps = {
url: string;
data: Required<Pick<Data, 'url'>>;
};
export function Excel({ url }: ExcelProps) {
export function Excel({ data }: ExcelProps) {
const url = data.url;
const [isLoading, setIsLoading] = useState<boolean>(false);
const [activeSheetName, setActiveSheetName] = useState<string>();
const [workbook, setWorkbook] = useState<any>();
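
With the prop change from `url: string` to `data: Required<Pick<Data, 'url'>>`, a call site now wraps the URL in an object. A minimal sketch; the workbook URL is the one used in the Storybook story below:

```tsx
import { Excel } from '@portaljs/components';

// Renders the workbook in an AG Grid table, with a selector per sheet.
export function WorkbookPreview() {
  return <Excel data={{ url: 'https://sheetjs.com/pres.xlsx' }} />;
}
```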

View File

@ -2,6 +2,7 @@ import { QueryClient, QueryClientProvider, useQuery } from 'react-query';
import Papa from 'papaparse';
import { Grid } from '@githubocto/flat-ui';
import LoadingSpinner from './LoadingSpinner';
import { Data } from '../types/properties';
const queryClient = new QueryClient();
@ -36,30 +37,25 @@ export async function parseCsv(file: string, parsingConfig): Promise<any> {
}
export interface FlatUiTableProps {
url?: string;
data?: { [key: string]: number | string }[];
rawCsv?: string;
randomId?: number;
data: Data;
uniqueId?: number;
bytes: number;
parsingConfig: any;
}
export const FlatUiTable: React.FC<FlatUiTableProps> = ({
url,
data,
rawCsv,
uniqueId,
bytes = 5132288,
parsingConfig = {},
}) => {
const randomId = Math.random();
uniqueId = uniqueId ?? Math.random();
return (
// Provide the client to your App
<QueryClientProvider client={queryClient}>
<TableInner
bytes={bytes}
url={url}
data={data}
rawCsv={rawCsv}
randomId={randomId}
uniqueId={uniqueId}
parsingConfig={parsingConfig}
/>
</QueryClientProvider>
@ -67,33 +63,32 @@ export const FlatUiTable: React.FC<FlatUiTableProps> = ({
};
const TableInner: React.FC<FlatUiTableProps> = ({
url,
data,
rawCsv,
randomId,
uniqueId,
bytes,
parsingConfig,
}) => {
if (data) {
const url = data.url;
const csv = data.csv;
const values = data.values;
if (values) {
return (
<div className="w-full" style={{ height: '500px' }}>
<Grid data={data} />
<Grid data={values} />
</div>
);
}
const { data: csvString, isLoading: isDownloadingCSV } = useQuery(
['dataCsv', url, randomId],
['dataCsv', url, uniqueId],
() => getCsv(url as string, bytes),
{ enabled: !!url }
);
const { data: parsedData, isLoading: isParsing } = useQuery(
['dataPreview', csvString, randomId],
['dataPreview', csvString, uniqueId],
() =>
parseCsv(
rawCsv ? (rawCsv as string) : (csvString as string),
parsingConfig
),
{ enabled: rawCsv ? true : !!csvString }
parseCsv(csv ? (csv as string) : (csvString as string), parsingConfig),
{ enabled: csv ? true : !!csvString }
);
if (isParsing || isDownloadingCSV)
<div className="w-full flex justify-center items-center h-[500px]">
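
The reworked `FlatUiTable` takes a single `data` object (`url`, `values`, or `csv`) plus an optional `uniqueId` used in the query cache keys. A usage sketch; the CSV URL is the one from the Storybook story further down, and the byte limit and parsing options are arbitrary examples:

```tsx
import { FlatUiTable } from '@portaljs/components';

// Fetches only the first `bytes` of the remote CSV (default ~5 MB),
// parses it with Papa Parse using `parsingConfig`, and renders it as a grid.
export function BudgetPreview() {
  return (
    <FlatUiTable
      data={{
        url: 'https://storage.openspending.org/alberta-budget/__os_imported__alberta_total.csv',
      }}
      bytes={1048576}
      parsingConfig={{ delimiter: ',' }}
      uniqueId={42}
    />
  );
}
```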

View File

@ -1,14 +1,17 @@
import { CSSProperties } from "react";
import { CSSProperties } from 'react';
import { Data } from '../types/properties';
export interface IframeProps {
url: string;
data: Required<Pick<Data, 'url'>>;
style?: CSSProperties;
}
export function Iframe({
url, style
}: IframeProps) {
export function Iframe({ data, style }: IframeProps) {
const url = data.url;
return (
<iframe src={url} style={style ?? { width: `100%`, height: `100%` }}></iframe>
<iframe
src={url}
style={style ?? { width: `100%`, height: `600px` }}
></iframe>
);
}
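
The `Iframe` component now receives its target through `data.url` and defaults to a 600px-high frame when no `style` is passed. A sketch with an illustrative URL:

```tsx
import { Iframe } from '@portaljs/components';

// Falls back to { width: '100%', height: '600px' } when no style is given.
export function EmbeddedReport() {
  return (
    <Iframe
      data={{ url: 'https://example.com/dashboard' }}
      style={{ width: '100%', height: '400px' }}
    />
  );
}
```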

View File

@ -2,35 +2,40 @@ import { useEffect, useState } from 'react';
import LoadingSpinner from './LoadingSpinner';
import { VegaLite } from './VegaLite';
import loadData from '../lib/loadData';
import { Data } from '../types/properties';
type AxisType = 'quantitative' | 'temporal';
type TimeUnit = 'year' | undefined; // or ...
type TimeUnit = 'year' | 'yearmonth' | undefined; // or ...
export type LineChartProps = {
data: Array<Array<string | number>> | string | { x: string; y: number }[];
data: Omit<Data, 'csv'>;
title?: string;
xAxis?: string;
xAxis: string;
xAxisType?: AxisType;
xAxisTimeUnit: TimeUnit;
yAxis?: string;
xAxisTimeUnit?: TimeUnit;
yAxis: string | string[];
yAxisType?: AxisType;
fullWidth?: boolean;
symbol?: string;
};
export function LineChart({
data = [],
fullWidth = false,
data,
title = '',
xAxis = 'x',
xAxis,
xAxisType = 'temporal',
xAxisTimeUnit = 'year', // TODO: defaults to undefined would probably work better... keeping it as it's for compatibility purposes
yAxis = 'y',
yAxis,
yAxisType = 'quantitative',
symbol,
}: LineChartProps) {
const url = data.url;
const values = data.values;
const [isLoading, setIsLoading] = useState<boolean>(false);
// By default, assumes data is an Array...
const [specData, setSpecData] = useState<any>({ name: 'table' });
const isMultiYAxis = Array.isArray(yAxis);
const spec = {
$schema: 'https://vega.github.io/schema/vega-lite/v5.json',
@ -42,8 +47,17 @@ export function LineChart({
color: 'black',
strokeWidth: 1,
tooltip: true,
invalid: "break-paths"
},
data: specData,
...(isMultiYAxis
? {
transform: [
{ fold: yAxis, as: ['key', 'value'] },
{ filter: 'datum.value != null && datum.value != ""' }
],
}
: {}),
selection: {
grid: {
type: 'interval',
@ -57,20 +71,35 @@ export function LineChart({
type: xAxisType,
},
y: {
field: yAxis,
field: isMultiYAxis ? 'value' : yAxis,
type: yAxisType,
},
...(symbol
? {
color: {
field: symbol,
type: 'nominal',
},
}
: {}),
...(isMultiYAxis
? {
color: {
field: 'key',
type: 'nominal',
},
}
: {}),
},
} as any;
useEffect(() => {
// If data is string, assume it's a URL
if (typeof data === 'string') {
if (url) {
setIsLoading(true);
// Manualy loading the data allows us to do other kinds
// of stuff later e.g. load a file partially
loadData(data).then((res: any) => {
loadData(url).then((res: any) => {
setSpecData({ values: res, format: { type: 'csv' } });
setIsLoading(false);
});
@ -78,12 +107,8 @@ export function LineChart({
}, []);
var vegaData = {};
if (Array.isArray(data)) {
var dataObj;
dataObj = data.map((r) => {
return { x: r[0], y: r[1] };
});
vegaData = { table: dataObj };
if (values) {
vegaData = { table: values };
}
return isLoading ? (
@ -91,6 +116,6 @@ export function LineChart({
<LoadingSpinner />
</div>
) : (
<VegaLite fullWidth={fullWidth} data={vegaData} spec={spec} />
<VegaLite data={vegaData} spec={spec} />
);
}
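
A sketch of the new multi-series usage based on the props above: passing an array as `yAxis` triggers the fold transform, giving one colored line per column, and invalid or missing values now break the line instead of being connected. The CSV URL and column names are illustrative:

```tsx
import { LineChart } from '@portaljs/components';

// An array yAxis plots one colored line per column; gaps in the values
// break the line rather than being force-connected.
export function TemperatureChart() {
  return (
    <LineChart
      title="Temperature anomaly by region"
      data={{ url: 'https://example.com/temperatures.csv' }}
      xAxis="Year"
      xAxisType="temporal"
      xAxisTimeUnit="year"
      yAxis={['Northern', 'Southern']}
    />
  );
}
```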

View File

@ -2,6 +2,7 @@ import { CSSProperties, useEffect, useState } from 'react';
import LoadingSpinner from './LoadingSpinner';
import loadData from '../lib/loadData';
import chroma from 'chroma-js';
import { GeospatialData } from '../types/properties';
import {
MapContainer,
TileLayer,
@ -11,32 +12,67 @@ import {
import 'leaflet/dist/leaflet.css';
import * as L from 'leaflet';
import providers from '../lib/tileLayerPresets';
type VariantKeys<T> = T extends { variants: infer V }
? {
[K in keyof V]: K extends string
? `${K}` | `${K}.${VariantKeys<V[K]>}`
: never;
}[keyof V]
: never;
type ProviderVariantKeys<T> = {
[K in keyof T]: K extends string
? `${K}` | `${K}.${VariantKeys<T[K]>}`
: never;
}[keyof T];
type TileLayerPreset = ProviderVariantKeys<typeof providers> | 'custom';
interface TileLayerSettings extends L.TileLayerOptions {
url?: string;
variant?: string | any;
}
export type MapProps = {
tileLayerName?: TileLayerPreset;
tileLayerOptions?: TileLayerSettings | undefined;
layers: {
data: string | GeoJSON.GeoJSON;
data: GeospatialData;
name: string;
colorScale?: {
starting: string;
ending: string;
};
tooltip?:
| {
propNames: string[];
}
| boolean;
_id?: number;
| {
propNames: string[];
}
| boolean;
}[];
title?: string;
center?: { latitude: number | undefined; longitude: number | undefined };
zoom?: number;
style?: CSSProperties;
autoZoomConfiguration?: {
layerName: string
}
layerName: string;
};
};
const tileLayerDefaultName = process?.env
.NEXT_PUBLIC_MAP_TILE_LAYER_NAME as TileLayerPreset;
const tileLayerDefaultOptions = Object.keys(process?.env)
.filter((key) => key.startsWith('NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_'))
.reduce((obj, key) => {
obj[key.split('NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_')[1]] = process.env[key];
return obj;
}, {}) as TileLayerSettings;
export function Map({
tileLayerName = tileLayerDefaultName || 'OpenStreetMap',
tileLayerOptions,
layers = [
{
data: null,
@ -54,19 +90,110 @@ export function Map({
const [isLoading, setIsLoading] = useState<boolean>(false);
const [layersData, setLayersData] = useState<any>([]);
/*
tileLayerDefaultOptions
extract all environment variables thats starts with NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_.
the variables names are the same as the TileLayer object properties:
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_url:
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_attribution
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_accessToken
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_id
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_ext
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_bounds
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_maxZoom
- NEXT_PUBLIC_MAP_TILE_LAYER_OPTION_minZoom
see TileLayerOptions inteface
*/
//tileLayerData prioritizes properties passed through component over those passed through .env variables
tileLayerOptions = Object.assign(tileLayerDefaultOptions, tileLayerOptions);
let provider = {
url: tileLayerOptions.url,
options: tileLayerOptions,
};
if (tileLayerName != 'custom') {
var parts = tileLayerName.split('.');
var providerName = parts[0];
var variantName: string = parts[1];
//make sure to declare a variant if url depends on a variant: assume first
if (providers[providerName].url?.includes('{variant}') && !variantName)
variantName = Object.keys(providers[providerName].variants)[0];
if (!providers[providerName]) {
throw 'No such provider (' + providerName + ')';
}
provider = {
url: providers[providerName].url,
options: providers[providerName].options,
};
// overwrite values in provider from variant.
if (variantName && 'variants' in providers[providerName]) {
if (!(variantName in providers[providerName].variants)) {
throw 'No such variant of ' + providerName + ' (' + variantName + ')';
}
var variant = providers[providerName].variants[variantName];
var variantOptions;
if (typeof variant === 'string') {
variantOptions = {
variant: variant,
};
} else {
variantOptions = variant.options;
}
provider = {
url: variant.url || provider.url,
options: L.Util.extend({}, provider.options, variantOptions),
};
}
var attributionReplacer = function (attr) {
if (attr.indexOf('{attribution.') === -1) {
return attr;
}
return attr.replace(
/\{attribution.(\w*)\}/g,
function (match: any, attributionName: string) {
match;
return attributionReplacer(
providers[attributionName].options.attribution
);
}
);
};
provider.options.attribution = attributionReplacer(
provider.options.attribution
);
}
var tileLayerData = L.Util.extend(
{
url: provider.url,
},
provider.options,
tileLayerOptions
);
useEffect(() => {
const loadDataPromises = layers.map(async (layer) => {
const url = layer.data.url;
const geojson = layer.data.geojson;
let layerData: any;
if (typeof layer.data === 'string') {
if (url) {
// If "data" is string, assume it's a URL
setIsLoading(true);
layerData = await loadData(layer.data).then((res: any) => {
layerData = await loadData(url).then((res: any) => {
return JSON.parse(res);
});
} else {
// Else, expect raw GeoJSON
layerData = layer.data;
layerData = geojson;
}
if (layer.colorScale) {
@ -98,6 +225,7 @@ export function Map({
</div>
) : (
<MapContainer
key={layersData}
center={[center.latitude, center.longitude]}
zoom={zoom}
scrollWheelZoom={false}
@ -111,23 +239,23 @@ export function Map({
// Create the title box
var info = new L.Control() as any;
info.onAdd = function() {
info.onAdd = function () {
this._div = L.DomUtil.create('div', 'info');
this.update();
return this._div;
};
info.update = function() {
info.update = function () {
this._div.innerHTML = `<h4 style="font-weight: 600; background: #f9f9f9; padding: 5px; border-radius: 5px; color: #464646;">${title}</h4>`;
};
if (title) info.addTo(map.target);
if(!autoZoomConfiguration) return;
if (!autoZoomConfiguration) return;
let layerToZoomBounds = L.latLngBounds(L.latLng(0, 0), L.latLng(0, 0));
layers.forEach((layer) => {
if(layer.name === autoZoomConfiguration.layerName) {
if (layer.name === autoZoomConfiguration.layerName) {
const data = layersData.find(
(layerData) => layerData.name === layer.name
)?.data;
@ -142,10 +270,8 @@ export function Map({
map.target.fitBounds(layerToZoomBounds);
}}
>
<TileLayer
attribution='&copy; <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
/>
{tileLayerData.url && <TileLayer {...tileLayerData} />}
<LayersControl position="bottomright">
{layers.map((layer) => {
const data = layersData.find(
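
A usage sketch for the new tile-layer configuration, assuming the props shown above; the GeoJSON URL and layer name are illustrative:

```tsx
import { Map } from '@portaljs/components';

// tileLayerName takes a preset name (default 'OpenStreetMap'), or 'custom'
// combined with tileLayerOptions; the same settings can also come from
// NEXT_PUBLIC_MAP_TILE_LAYER_* environment variables.
export function RegionsMap() {
  return (
    <Map
      tileLayerName="OpenStreetMap"
      layers={[
        {
          name: 'regions',
          data: { url: 'https://example.com/regions.geojson' },
          tooltip: { propNames: ['name'] },
        },
      ]}
      center={{ latitude: 45, longitude: 5 }}
      zoom={4}
      autoZoomConfiguration={{ layerName: 'regions' }}
    />
  );
}
```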

View File

@ -1,22 +1,24 @@
// Core viewer
import { Viewer, Worker, SpecialZoomLevel } from '@react-pdf-viewer/core';
import { defaultLayoutPlugin } from '@react-pdf-viewer/default-layout';
import { Data } from '../types/properties';
// Import styles
import '@react-pdf-viewer/core/lib/styles/index.css';
import '@react-pdf-viewer/default-layout/lib/styles/index.css';
export interface PdfViewerProps {
url: string;
data: Required<Pick<Data, 'url'>>;
layout: boolean;
parentClassName?: string;
}
export function PdfViewer({
url,
data,
layout = false,
parentClassName,
parentClassName = 'h-screen',
}: PdfViewerProps) {
const url = data.url;
const defaultLayoutPluginInstance = defaultLayoutPlugin();
return (
<Worker workerUrl="https://unpkg.com/pdfjs-dist@2.15.349/build/pdf.worker.js">

View File

@ -0,0 +1,9 @@
import Plot, { PlotParams } from "react-plotly.js";
export const Plotly: React.FC<PlotParams> = (props) => {
return (
<div>
<Plot {...props} />
</div>
);
};

View File

@ -0,0 +1,153 @@
import { QueryClient, QueryClientProvider, useQuery } from 'react-query';
import { Plotly } from './Plotly';
import Papa, { ParseConfig } from 'papaparse';
import LoadingSpinner from './LoadingSpinner';
import { Data } from '../types/properties';
const queryClient = new QueryClient();
async function getCsv(url: string, bytes: number) {
const response = await fetch(url, {
headers: {
Range: `bytes=0-${bytes}`,
},
});
const data = await response.text();
return data;
}
async function parseCsv(
file: string,
parsingConfig: ParseConfig
): Promise<any> {
return new Promise((resolve, reject) => {
Papa.parse(file, {
...parsingConfig,
header: true,
dynamicTyping: true,
skipEmptyLines: true,
transform: (value: string): string => {
return value.trim();
},
complete: (results: any) => {
return resolve(results);
},
error: (error: any) => {
return reject(error);
},
});
});
}
export interface PlotlyBarChartProps {
data: Data;
uniqueId?: number;
bytes?: number;
parsingConfig?: ParseConfig;
xAxis: string;
yAxis: string;
// TODO: commented out because this doesn't work. I believe
// this would only make any difference on charts with multiple
// traces.
// lineLabel?: string;
title?: string;
}
export const PlotlyBarChart: React.FC<PlotlyBarChartProps> = ({
data,
bytes = 5132288,
parsingConfig = {},
xAxis,
yAxis,
// lineLabel,
title = '',
}) => {
const uniqueId = Math.random();
return (
// Provide the client to your App
<QueryClientProvider client={queryClient}>
<PlotlyBarChartInner
data={data}
uniqueId={uniqueId}
bytes={bytes}
parsingConfig={parsingConfig}
xAxis={xAxis}
yAxis={yAxis}
// lineLabel={lineLabel ?? yAxis}
title={title}
/>
</QueryClientProvider>
);
};
const PlotlyBarChartInner: React.FC<PlotlyBarChartProps> = ({
data,
uniqueId,
bytes,
parsingConfig,
xAxis,
yAxis,
// lineLabel,
title,
}) => {
if (data.values) {
return (
<div className="w-full" style={{ height: '500px' }}>
<Plotly
layout={{
title,
}}
data={[
{
x: data.values.map((d) => d[xAxis]),
y: data.values.map((d) => d[yAxis]),
type: 'bar',
// name: lineLabel,
},
]}
/>
</div>
);
}
const { data: csvString, isLoading: isDownloadingCSV } = useQuery(
['dataCsv', data.url, uniqueId],
() => getCsv(data.url as string, bytes ?? 5132288),
{ enabled: !!data.url }
);
const { data: parsedData, isLoading: isParsing } = useQuery(
['dataPreview', csvString, uniqueId],
() =>
parseCsv(
data.csv ? (data.csv as string) : (csvString as string),
parsingConfig ?? {}
),
{ enabled: data.csv ? true : !!csvString }
);
if (isParsing || isDownloadingCSV)
<div className="w-full flex justify-center items-center h-[500px]">
<LoadingSpinner />
</div>;
if (parsedData)
return (
<div className="w-full" style={{ height: '500px' }}>
<Plotly
layout={{
title,
}}
data={[
{
x: parsedData.data.map((d: any) => d[xAxis]),
y: parsedData.data.map((d: any) => d[yAxis]),
type: 'bar',
// name: lineLabel, TODO: commented out because this doesn't work
},
]}
/>
</div>
);
return (
<div className="w-full flex justify-center items-center h-[500px]">
<LoadingSpinner />
</div>
);
};
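
A sketch of the new `PlotlyBarChart`: when `data.values` is supplied the chart renders immediately, while `data.url` or `data.csv` makes the component fetch and parse the CSV first. All values below are illustrative:

```tsx
import { PlotlyBarChart } from '@portaljs/components';

// xAxis and yAxis name the keys to read from each row of data.values.
export function SpendingChart() {
  return (
    <PlotlyBarChart
      title="Spending by category"
      data={{
        values: [
          { category: 'Health', amount: 120 },
          { category: 'Education', amount: 95 },
          { category: 'Transport', amount: 60 },
        ],
      }}
      xAxis="category"
      yAxis="amount"
    />
  );
}
```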

View File

@ -0,0 +1,155 @@
import { QueryClient, QueryClientProvider, useQuery } from 'react-query';
import { Plotly } from './Plotly';
import Papa, { ParseConfig } from 'papaparse';
import LoadingSpinner from './LoadingSpinner';
import { Data } from '../types/properties';
const queryClient = new QueryClient();
async function getCsv(url: string, bytes: number) {
const response = await fetch(url, {
headers: {
Range: `bytes=0-${bytes}`,
},
});
const data = await response.text();
return data;
}
async function parseCsv(
file: string,
parsingConfig: ParseConfig
): Promise<any> {
return new Promise((resolve, reject) => {
Papa.parse(file, {
...parsingConfig,
header: true,
dynamicTyping: true,
skipEmptyLines: true,
transform: (value: string): string => {
return value.trim();
},
complete: (results: any) => {
return resolve(results);
},
error: (error: any) => {
return reject(error);
},
});
});
}
export interface PlotlyLineChartProps {
data: Data;
bytes?: number;
parsingConfig?: ParseConfig;
xAxis: string;
yAxis: string;
lineLabel?: string;
title?: string;
uniqueId?: number;
}
export const PlotlyLineChart: React.FC<PlotlyLineChartProps> = ({
data,
bytes = 5132288,
parsingConfig = {},
xAxis,
yAxis,
lineLabel,
title = '',
uniqueId,
}) => {
uniqueId = uniqueId ?? Math.random();
return (
// Provide the client to your App
<QueryClientProvider client={queryClient}>
<LineChartInner
data={data}
uniqueId={uniqueId}
bytes={bytes}
parsingConfig={parsingConfig}
xAxis={xAxis}
yAxis={yAxis}
lineLabel={lineLabel ?? yAxis}
title={title}
/>
</QueryClientProvider>
);
};
const LineChartInner: React.FC<PlotlyLineChartProps> = ({
data,
uniqueId,
bytes,
parsingConfig,
xAxis,
yAxis,
lineLabel,
title,
}) => {
const values = data.values;
const url = data.url;
const csv = data.csv;
if (values) {
return (
<div className="w-full" style={{ height: '500px' }}>
<Plotly
layout={{
title,
}}
data={[
{
x: values.map((d) => d[xAxis]),
y: values.map((d) => d[yAxis]),
mode: 'lines',
name: lineLabel,
},
]}
/>
</div>
);
}
const { data: csvString, isLoading: isDownloadingCSV } = useQuery(
['dataCsv', url, uniqueId],
() => getCsv(url as string, bytes ?? 5132288),
{ enabled: !!url }
);
const { data: parsedData, isLoading: isParsing } = useQuery(
['dataPreview', csvString, uniqueId],
() =>
parseCsv(
csv ? (csv as string) : (csvString as string),
parsingConfig ?? {}
),
{ enabled: csv ? true : !!csvString }
);
if (isParsing || isDownloadingCSV)
<div className="w-full flex justify-center items-center h-[500px]">
<LoadingSpinner />
</div>;
if (parsedData)
return (
<div className="w-full" style={{ height: '500px' }}>
<Plotly
layout={{
title,
}}
data={[
{
x: parsedData.data.map((d: any) => d[xAxis]),
y: parsedData.data.map((d: any) => d[yAxis]),
mode: 'lines',
name: lineLabel,
},
]}
/>
</div>
);
return (
<div className="w-full flex justify-center items-center h-[500px]">
<LoadingSpinner />
</div>
);
};

View File

@ -1,6 +1,7 @@
// Wrapper for the Vega component
import { Vega as VegaOg } from "react-vega";
import { VegaProps } from "react-vega/lib/Vega";
export function Vega(props) {
export function Vega(props: VegaProps) {
return <VegaOg {...props} />;
}

View File

@ -1,8 +1,9 @@
// Wrapper for the Vega Lite component
import { VegaLite as VegaLiteOg } from "react-vega";
import applyFullWidthDirective from "../lib/applyFullWidthDirective";
import { VegaLite as VegaLiteOg } from 'react-vega';
import { VegaLiteProps } from 'react-vega/lib/VegaLite';
import applyFullWidthDirective from '../lib/applyFullWidthDirective';
export function VegaLite(props) {
export function VegaLite(props: VegaLiteProps) {
const Component = applyFullWidthDirective({ Component: VegaLiteOg });
return <Component {...props} />;

View File

@ -1,12 +1,17 @@
export * from './components/Table';
export * from './components/Catalog';
export * from './components/LineChart';
export * from './components/Vega';
export * from './components/VegaLite';
export * from './components/FlatUiTable';
export * from './components/OpenLayers/OpenLayers';
export * from './components/Map';
export * from './components/PdfViewer';
export * from "./components/Excel";
export * from "./components/BucketViewer";
export * from "./components/Iframe";
export * from "./components/Plotly";
export * from "./components/PlotlyLineChart";
export * from "./components/PlotlyBarChart";
// NOTE: components that are hidden for now
// TODO: deprecate those components?
// export * from './components/Table';
// export * from "./components/BucketViewer";
// export * from './components/OpenLayers/OpenLayers';

File diff suppressed because it is too large.

View File

@ -0,0 +1,18 @@
/*
* All components should use this interface for
* its data property.
* Based on vega.
*
*/
type URL = string; // Just in case we want to transform it into an object with configurations
export interface Data {
url?: URL;
values?: { [key: string]: number | string | null | undefined }[];
csv?: string;
}
export interface GeospatialData {
url?: URL;
geojson?: GeoJSON.GeoJSON;
}
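
To make the shared interface concrete, a small sketch of the accepted shapes; the relative import path matches how the components in this diff import it, and all values are illustrative:

```ts
import type { Data, GeospatialData } from '../types/properties';

// Tabular data: normally exactly one of `url`, `values`, or `csv` is set.
const fromUrl: Data = { url: 'https://example.com/data.csv' };
const fromValues: Data = {
  values: [
    { Year: 1850, Temp: -0.418 },
    { Year: 2020, Temp: 0.923 },
  ],
};
const fromCsv: Data = { csv: 'Year,Temp\n1850,-0.418\n2020,0.923' };

// Geospatial data for the Map component: a GeoJSON URL or inline GeoJSON.
const layer: GeospatialData = { url: 'https://example.com/regions.geojson' };
```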

View File

@ -1,3 +1,6 @@
// NOTE: this component was renamed with .bkp so that it's hidden
// from the Storybook app
import { type Meta, type StoryObj } from '@storybook/react';
import {

View File

@ -10,11 +10,14 @@ const meta: Meta = {
argTypes: {
datasets: {
description:
'Lists of datasets to be displayed in the list, will usually be automatically available',
"Array of items to be displayed on the searchable list. Must have the following properties: \n\n \
`_id`: item's unique id \n\n \
`url_path`: href of the item \n\n \
`metadata`: object with a `title` property, that will be displayed as the title of the item, together with any other custom fields that might or not be faceted.",
},
facets: {
description:
'List of frontmatter fields that should be used as filters, needs to match exactly with the field name',
"Array of strings, which are name of properties in the datasets' `metadata`, which are going to be faceted.",
},
},
};
@ -31,99 +34,35 @@ export const WithoutFacets: Story = {
{
_id: '07026b22d49916754df1dc8ffb9ccd1c31878aae',
url_path: 'dataset-4',
file_path: 'content/dataset-4/index.md',
metadata: {
title: 'Detecting Abusive Albanian',
'link-to-publication': 'https://arxiv.org/abs/2107.13592',
'link-to-data': 'https://doi.org/10.6084/m9.figshare.19333298.v1',
'task-description':
'Hierarchical (offensive/not; untargeted/targeted; person/group/other)',
'details-of-task':
'Detect and categorise abusive language in social media data',
'size-of-dataset': 11874,
'percentage-abusive': 13.2,
language: 'Albanian',
'level-of-annotation': ['Posts'],
platform: ['Instagram', 'Youtube'],
medium: ['Text'],
reference:
'Nurce, E., Keci, J., Derczynski, L., 2021. Detecting Abusive Albanian. arXiv:2107.13592',
},
},
{
_id: '42c86cf3c4fbbab11d91c2a7d6dcb8f750bc4e19',
url_path: 'dataset-1',
file_path: 'content/dataset-1/index.md',
metadata: {
title: 'AbuseEval v1.0',
'link-to-publication':
'http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf',
'link-to-data': 'https://github.com/tommasoc80/AbuseEval',
'task-description':
'Explicitness annotation of offensive and abusive content',
'details-of-task':
'Enriched versions of the OffensEval/OLID dataset with the distinction of explicit/implicit offensive messages and the new dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICT, NOT; Labels for abusive language: EXPLICIT, IMPLICT, NOTABU',
'size-of-dataset': 14100,
'percentage-abusive': 20.75,
language: 'English',
'level-of-annotation': ['Tweets'],
platform: ['Twitter'],
medium: ['Text'],
reference:
'Caselli, T., Basile, V., Jelena, M., Inga, K., and Michael, G. 2020. "I feel offended, dont be abusive! implicit/explicit messages in offensive and abusive language". The 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association.',
},
},
{
_id: '80001dd32a752421fdcc64e91fbd237dc31d6bb3',
url_path: 'dataset-2',
file_path: 'content/dataset-2/index.md',
metadata: {
title:
'Abusive Language Detection on Arabic Social Media (Al Jazeera)',
'link-to-publication': 'https://www.aclweb.org/anthology/W17-3008',
'link-to-data':
'http://alt.qcri.org/~hmubarak/offensive/AJCommentsClassification-CF.xlsx',
'task-description':
'Ternary (Obscene, Offensive but not obscene, Clean)',
'details-of-task': 'Incivility',
'size-of-dataset': 32000,
'percentage-abusive': 0.81,
language: 'Arabic',
'level-of-annotation': ['Posts'],
platform: ['AlJazeera'],
medium: ['Text'],
reference:
'Mubarak, H., Darwish, K. and Magdy, W., 2017. Abusive Language Detection on Arabic Social Media. In: Proceedings of the First Workshop on Abusive Language Online. Vancouver, Canada: Association for Computational Linguistics, pp.52-56.',
},
},
{
_id: '96649d05d8193f4333b10015af76c6562971bd8c',
url_path: 'dataset-3',
file_path: 'content/dataset-3/index.md',
metadata: {
title: 'CoRAL: a Context-aware Croatian Abusive Language Dataset',
'link-to-publication':
'https://aclanthology.org/2022.findings-aacl.21/',
'link-to-data':
'https://github.com/shekharRavi/CoRAL-dataset-Findings-of-the-ACL-AACL-IJCNLP-2022',
'task-description':
'Multi-class based on context dependency categories (CDC)',
'details-of-task': 'Detectioning CDC from abusive comments',
'size-of-dataset': 2240,
'percentage-abusive': 100,
language: 'Croatian',
'level-of-annotation': ['Posts'],
platform: ['Posts'],
medium: ['Newspaper Comments'],
reference:
'Ravi Shekhar, Mladen Karan and Matthew Purver (2022). CoRAL: a Context-aware Croatian Abusive Language Dataset. Findings of the ACL: AACL-IJCNLP.',
},
},
],
},
};
;
export const WithFacets: Story = {
name: 'Catalog with facets',
args: {
@ -131,7 +70,6 @@ export const WithFacets: Story = {
{
_id: '07026b22d49916754df1dc8ffb9ccd1c31878aae',
url_path: 'dataset-4',
file_path: 'content/dataset-4/index.md',
metadata: {
title: 'Detecting Abusive Albanian',
'link-to-publication': 'https://arxiv.org/abs/2107.13592',
@ -220,7 +158,6 @@ export const WithFacets: Story = {
},
},
],
facets: ['language', 'platform']
facets: ['language', 'platform'],
},
};
;

View File

@ -4,13 +4,13 @@ import { Excel, ExcelProps } from '../src/components/Excel';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Excel',
title: 'Components/Tabular/Excel',
component: Excel,
tags: ['autodocs'],
argTypes: {
url: {
data: {
description:
'Url of the file to be displayed e.g.: "https://url.to/data.csv"',
'Object with a `url` property pointing to the Excel file to be displayed, e.g.: `{ url: "https://url.to/data.csv" }`',
},
},
};
@ -22,13 +22,17 @@ type Story = StoryObj<ExcelProps>;
export const SingleSheet: Story = {
name: 'Excel file with just one sheet',
args: {
url: 'https://sheetjs.com/pres.xlsx',
data: {
url: 'https://sheetjs.com/pres.xlsx',
},
},
};
export const MultipleSheet: Story = {
name: 'Excel file with multiple sheets',
args: {
url: 'https://storage.portaljs.org/IC-Gantt-Chart-Project-Template-8857.xlsx',
data: {
url: 'https://storage.portaljs.org/IC-Gantt-Chart-Project-Template-8857.xlsx',
},
},
};

View File

@ -4,29 +4,31 @@ import { FlatUiTable, FlatUiTableProps } from '../src/components/FlatUiTable';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/FlatUiTable',
title: 'Components/Tabular/FlatUiTable',
component: FlatUiTable,
tags: ['autodocs'],
argTypes: {
data: {
description:
'Data to be displayed in the table, must be setup as an array of key value pairs',
},
csv: {
description: 'CSV data as string.',
},
url: {
description:
'Fetch the data from a CSV file remotely. only the first 5MB of data will be displayed',
'Data to be displayed. \n\n \
Must be an object with one of the following properties: `url`, `values` or `csv` \n\n \
`url`: URL pointing to a CSV file. \n\n \
`values`: array of objects. \n\n \
`csv`: string with valid CSV. \n\n \
',
},
bytes: {
description:
'Fetch the data from a CSV file remotely. only the first <bytes> of data will be displayed',
'Fetch the data from a CSV file remotely. Only the first <bytes> of data will be displayed. Defaults to 5MB.',
},
parsingConfig: {
description:
'Configuration for parsing the CSV data. See https://www.papaparse.com/docs#config for more details',
},
uniqueId: {
description:
'Provide a unique ID to help with cache revalidation of the fetched data.',
},
},
};
@ -36,34 +38,40 @@ type Story = StoryObj<FlatUiTableProps>;
// More on writing stories with args: https://storybook.js.org/docs/react/writing-stories/args
export const FromColumnsAndData: Story = {
name: 'Table data',
name: 'Table from array of objects',
args: {
data: [
{ id: 1, lastName: 'Snow', firstName: 'Jon', age: 35 },
{ id: 2, lastName: 'Lannister', firstName: 'Cersei', age: 42 },
{ id: 3, lastName: 'Lannister', firstName: 'Jaime', age: 45 },
{ id: 4, lastName: 'Stark', firstName: 'Arya', age: 16 },
{ id: 7, lastName: 'Clifford', firstName: 'Ferrara', age: 44 },
{ id: 8, lastName: 'Frances', firstName: 'Rossini', age: 36 },
{ id: 9, lastName: 'Roxie', firstName: 'Harvey', age: 65 },
],
data: {
values: [
{ id: 1, lastName: 'Snow', firstName: 'Jon', age: 35 },
{ id: 2, lastName: 'Lannister', firstName: 'Cersei', age: 42 },
{ id: 3, lastName: 'Lannister', firstName: 'Jaime', age: 45 },
{ id: 4, lastName: 'Stark', firstName: 'Arya', age: 16 },
{ id: 7, lastName: 'Clifford', firstName: 'Ferrara', age: 44 },
{ id: 8, lastName: 'Frances', firstName: 'Rossini', age: 36 },
{ id: 9, lastName: 'Roxie', firstName: 'Harvey', age: 65 },
],
},
},
};
export const FromRawCSV: Story = {
name: 'Table from raw CSV',
name: 'Table from inline CSV',
args: {
rawCsv: `
data: {
csv: `
Year,Temp Anomaly
1850,-0.418
2020,0.923
`,
},
},
};
export const FromURL: Story = {
name: 'Table from URL',
args: {
url: 'https://storage.openspending.org/alberta-budget/__os_imported__alberta_total.csv',
data: {
url: 'https://storage.openspending.org/alberta-budget/__os_imported__alberta_total.csv',
},
},
};
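
As a usage sketch (editorial annotation, not part of this diff), the optional fetch and parsing props documented above can be combined with the unified `data` prop; the values below are assumptions:

<FlatUiTable
  data={{ url: 'https://url.to/data.csv' }}
  bytes={1024 * 1024}              // only the first 1MB of the file is fetched
  parsingConfig={{ header: true }} // forwarded to PapaParse
  uniqueId="my-table"              // helps with cache revalidation of the fetched data
/>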

View File

@ -3,17 +3,17 @@ import { type Meta, type StoryObj } from '@storybook/react';
import { Iframe, IframeProps } from '../src/components/Iframe';
const meta: Meta = {
title: 'Components/Iframe',
title: 'Components/Embedding/Iframe',
component: Iframe,
tags: ['autodocs'],
argTypes: {
url: {
data: {
description:
'Page to display inside of the component',
'Object with a `url` property pointing to the page to be embedded.',
},
style: {
description:
'Style of the component',
'Style object of the component. See example at https://react.dev/learn#displaying-data. Defaults to `{ width: "100%", height: "100%" }`',
},
},
};
@ -25,7 +25,9 @@ type Story = StoryObj<IframeProps>;
export const Normal: Story = {
name: 'Iframe',
args: {
url: 'https://app.powerbi.com/view?r=eyJrIjoiYzBmN2Q2MzYtYzE3MS00ODkxLWE5OWMtZTQ2MjBlMDljMDk4IiwidCI6Ijk1M2IwZjgzLTFjZTYtNDVjMy04MmM5LTFkODQ3ZTM3MjMzOSIsImMiOjh9',
style: {width: `100%`, height: `100%`}
data: {
url: 'https://app.powerbi.com/view?r=eyJrIjoiYzBmN2Q2MzYtYzE3MS00ODkxLWE5OWMtZTQ2MjBlMDljMDk4IiwidCI6Ijk1M2IwZjgzLTFjZTYtNDVjMy04MmM5LTFkODQ3ZTM3MjMzOSIsImMiOjh9',
},
style: { width: `100%`, height: `600px` },
},
};

View File

@ -4,6 +4,6 @@ import { Meta } from '@storybook/blocks';
# Welcome to the PortalJS components guide
**Official Website:** [portaljs.org](https://portaljs.org)
**Docs:** [portaljs.org/docs](https://portaljs.org/docs)
**Official Website:** [portaljs.com](https://portaljs.com)
**Docs:** [portaljs.com/opensource](https://portaljs.com/opensource)
**GitHub:** [github.com/datopian/portaljs](https://github.com/datopian/portaljs)

View File

@ -4,37 +4,40 @@ import { LineChart, LineChartProps } from '../src/components/LineChart';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/LineChart',
title: 'Components/Charts/LineChart',
component: LineChart,
tags: ['autodocs'],
argTypes: {
data: {
description:
'Data to be displayed.\n\n E.g.: [["1990", 1], ["1991", 2]] \n\nOR\n\n "https://url.to/data.csv"',
'Data to be displayed. \n\n \
Must be an object with one of the following properties: `url` or `values` \n\n \
`url`: URL pointing to a CSV file. \n\n \
`values`: array of objects \n\n',
},
title: {
description: 'Title to display on the chart. Optional.',
description: 'Title to display on the chart.',
},
xAxis: {
description:
'Name of the X axis on the data. Required when the "data" parameter is an URL.',
'Name of the column header or object property that represents the X-axis on the data.',
},
xAxisType: {
description: 'Type of the X axis',
description: 'Type of the X-axis.',
},
xAxisTimeUnit: {
description: 'Time unit of the X axis (optional)',
description: 'Time unit of the X-axis, in case its type is `temporal`.',
},
yAxis: {
description:
'Name of the Y axis on the data. Required when the "data" parameter is an URL.',
'Name of the column headers or object properties that represent the Y-axis on the data.',
},
yAxisType: {
description: 'Type of the Y axis',
description: 'Type of the Y-axis.',
},
fullWidth: {
symbol: {
description:
'Whether the component should be rendered as full bleed or not',
'Name of the column header or object property that represents a series for multiple series.',
},
},
};
@ -47,22 +50,112 @@ type Story = StoryObj<LineChartProps>;
export const FromDataPoints: Story = {
name: 'Line chart from array of data points',
args: {
data: [
['1850', -0.41765878],
['1851', -0.2333498],
['1852', -0.22939907],
['1853', -0.27035445],
['1854', -0.29163003],
],
data: {
values: [
{ year: '1850', value: -0.41765878 },
{ year: '1851', value: -0.2333498 },
{ year: '1852', value: -0.22939907 },
{ year: '1853', value: -0.27035445 },
{ year: '1854', value: -0.29163003 },
],
},
xAxis: 'year',
yAxis: 'value',
},
};
export const MultiSeries: Story = {
name: 'Line chart with multiple series (specifying symbol)',
args: {
data: {
values: [
{ year: '1850', value: -0.41765878, z: 'A' },
{ year: '1851', value: -0.2333498, z: 'A' },
{ year: '1852', value: -0.22939907, z: 'A' },
{ year: '1853', value: -0.27035445, z: 'A' },
{ year: '1854', value: -0.29163003, z: 'A' },
{ year: '1850', value: -0.42993882, z: 'B' },
{ year: '1851', value: -0.30365549, z: 'B' },
{ year: '1852', value: -0.27905189, z: 'B' },
{ year: '1853', value: -0.22939704, z: 'B' },
{ year: '1854', value: -0.25688013, z: 'B' },
{ year: '1850', value: -0.4757164, z: 'C' },
{ year: '1851', value: -0.41971018, z: 'C' },
{ year: '1852', value: -0.40724799, z: 'C' },
{ year: '1853', value: -0.45049156, z: 'C' },
{ year: '1854', value: -0.41896583, z: 'C' },
],
},
xAxis: 'year',
yAxis: 'value',
symbol: 'z',
},
};
export const MultiColumns: Story = {
name: 'Line chart with multiple series (with multiple columns)',
args: {
data: {
values: [
{ year: '1850', A: -0.41765878, B: -0.42993882, C: -0.4757164 },
{ year: '1851', A: -0.2333498, B: -0.30365549, C: -0.41971018 },
{ year: '1852', A: -0.22939907, B: -0.27905189, C: -0.40724799 },
{ year: '1853', A: -0.27035445, B: -0.22939704, C: -0.45049156 },
{ year: '1854', A: -0.29163003, B: -0.25688013, C: -0.41896583 },
],
},
xAxis: 'year',
yAxis: ['A', 'B', 'C'],
},
};
export const FromURL: Story = {
name: 'Line chart from URL',
args: {
data: {
url: 'https://raw.githubusercontent.com/datasets/oil-prices/main/data/wti-year.csv',
},
title: 'Oil Price x Year',
data: 'https://raw.githubusercontent.com/datasets/oil-prices/main/data/wti-year.csv',
xAxis: 'Date',
yAxis: 'Price',
},
};
// export const FromURLMulti: Story = {
// name: 'Line chart from URL Multi Column',
// args: {
// data: {
// url: 'https://raw.githubusercontent.com/datasets/sea-level-rise/refs/heads/main/data/epa-sea-level.csv',
// },
// title: 'Sea Level Rise (1880-2023)',
// xAxis: 'Year',
// yAxis: ["CSIRO Adjusted Sea Level", "NOAA Adjusted Sea Level"],
// xAxisType: 'temporal',
// xAxisTimeUnit: 'year',
// yAxisType: 'quantitative'
// },
// };
// export const MultipleSeriesMissingValues: Story = {
// name: 'Line chart with missing values',
// args: {
// data: {
// values: [
// { year: '2020', seriesA: 10, seriesB: 15 },
// { year: '2021', seriesA: 20 }, // seriesB missing
// { year: '2022', seriesA: 15 }, // seriesB missing
// { year: '2023', seriesB: 30 }, // seriesA missing
// { year: '2024', seriesA: 25, seriesB: 35 },
// { year: '2024', seriesA: 20, seriesB: 40 },
// { year: '2024', seriesB: 45 },
// ],
// },
// title: 'Handling Missing Data Points',
// xAxis: 'year',
// yAxis: ['seriesA', 'seriesB'],
// xAxisType: 'temporal',
// xAxisTimeUnit: 'year',
// yAxisType: 'quantitative'
// },
// };

View File

@ -4,29 +4,34 @@ import { Map, MapProps } from '../src/components/Map';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Map',
title: 'Components/Geospatial/Map',
component: Map,
tags: ['autodocs'],
argTypes: {
layers: {
description:
'Data to be displayed.\n\n GeoJSON Object \n\nOR\n\n URL to GeoJSON Object',
'Array of layers to be displayed on the map. Should be an object with: \n\n \
`data`: object with either a `url` property pointing to a GeoJSON file or a `geojson` property with a GeoJSON object. \n\n \
`name`: name of the layer. \n\n \
`colorScale`: object with `starting` and `ending` colors used to create a gradient for coloring the map. \n\n \
`tooltip`: `true` to show all available feature properties in the tooltip, or an object with a `propNames` array of strings to choose which properties to display. \n\n',
},
title: {
description: 'Title to display on the map. Optional.',
description: 'Title to display on the map.',
},
center: {
description: 'Initial coordinates of the center of the map',
},
zoom: {
description: 'Zoom level',
description: 'Initial zoom level',
},
style: {
description: "Styles for the container"
description: "CSS styles to be applied to the map's container.",
},
autoZoomConfiguration: {
description: "Configuration to auto zoom in the specified layer data"
}
description:
"Pass a layer's name to automatically zoom to the bounding area of a layer.",
},
},
};
@ -38,9 +43,15 @@ type Story = StoryObj<MapProps>;
export const GeoJSONPolygons: Story = {
name: 'GeoJSON polygons map',
args: {
tileLayerName:'MapBox',
tileLayerOptions:{
accessToken : 'pk.eyJ1Ijoid2lsbHktcGFsbWFyZWpvIiwiYSI6ImNqNzk5NmRpNDFzb2cyeG9sc2luMHNjajUifQ.lkoVRFSI8hOLH4uJeOzwXw',
},
layers: [
{
data: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
data: {
url: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
},
name: 'Polygons',
tooltip: { propNames: ['name'] },
colorScale: {
@ -60,7 +71,9 @@ export const GeoJSONPoints: Story = {
args: {
layers: [
{
data: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
data: {
url: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
},
name: 'Points',
tooltip: { propNames: ['Location'] },
},
@ -76,12 +89,16 @@ export const GeoJSONMultipleLayers: Story = {
args: {
layers: [
{
data: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
data: {
url: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
},
name: 'Points',
tooltip: true,
},
{
data: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
data: {
url: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
},
name: 'Polygons',
tooltip: true,
colorScale: {
@ -94,19 +111,23 @@ export const GeoJSONMultipleLayers: Story = {
center: { latitude: 45, longitude: 0 },
zoom: 2,
},
}
};
export const GeoJSONMultipleLayersWithAutoZoomInSpecifiedLayer: Story = {
name: 'GeoJSON polygons and points map with auto zoom in the points layer',
args: {
layers: [
{
data: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
data: {
url: 'https://opendata.arcgis.com/datasets/9c58741995174fbcb017cf46c8a42f4b_25.geojson',
},
name: 'Points',
tooltip: true,
},
{
data: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
data: {
url: 'https://d2ad6b4ur7yvpq.cloudfront.net/naturalearth-3.3.0/ne_10m_geography_marine_polys.geojson',
},
name: 'Polygons',
tooltip: true,
colorScale: {
@ -119,7 +140,7 @@ export const GeoJSONMultipleLayersWithAutoZoomInSpecifiedLayer: Story = {
center: { latitude: 45, longitude: 0 },
zoom: 2,
autoZoomConfiguration: {
layerName: 'Points'
}
layerName: 'Points',
},
},
};

View File

@ -1,3 +1,6 @@
// NOTE: this component was renamed with .bkp so that it's hidden
// from the Storybook app
import type { Meta, StoryObj } from '@storybook/react';
import React from 'react';
import OpenLayers from '../src/components/OpenLayers/OpenLayers';

View File

@ -3,19 +3,21 @@ import type { Meta, StoryObj } from '@storybook/react';
import { PdfViewer, PdfViewerProps } from '../src/components/PdfViewer';
const meta: Meta = {
title: 'Components/PdfViewer',
title: 'Components/Embedding/PdfViewer',
component: PdfViewer,
tags: ['autodocs'],
argTypes: {
url: {
description: 'URL to PDF file',
data: {
description:
'Object with a `url` property pointing to the PDF file to be displayed, e.g.: `{ url: "https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK" }`.',
},
parentClassName: {
description: 'Classname for the parent div of the pdf viewer',
},
layour: {
description:
'Set to true if you want to have a layout with zoom level, page count, printing button etc',
'HTML classes to be applied to the container of the PDF viewer. [Tailwind](https://tailwindcss.com/) classes, such as `h-96` to define the height of the component, can be used on this field.',
},
layout: {
description:
'Set to `true` if you want to display a layout with zoom level, page count, printing button and other controls.',
defaultValue: false,
},
},
@ -25,26 +27,23 @@ export default meta;
type Story = StoryObj<PdfViewerProps>;
export const PdfViewerStory: Story = {
name: 'PdfViewer',
export const PdfViewerStoryWithoutControlsLayout: Story = {
name: 'PDF Viewer without controls layout',
args: {
url: 'https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK',
},
};
export const PdfViewerStoryWithLayout: Story = {
name: 'PdfViewer with the default layout',
args: {
url: 'https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK',
layout: true,
},
};
export const PdfViewerStoryWithHeight: Story = {
name: 'PdfViewer with a custom height',
args: {
url: 'https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK',
data: {
url: 'https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK',
},
parentClassName: 'h-96',
},
};
export const PdfViewerStoryWithControlsLayout: Story = {
name: 'PdfViewer with controls layout',
args: {
data: {
url: 'https://cdn.filestackcontent.com/wcrjf9qPTCKXV3hMXDwK',
},
layout: true,
parentClassName: 'h-96',
layout: true,
},
};

View File

@ -0,0 +1,49 @@
import type { Meta, StoryObj } from '@storybook/react';
import { Plotly } from '../src/components/Plotly';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Charts/Plotly',
component: Plotly,
tags: ['autodocs'],
argTypes: {
data: {
description:
"Plotly's `data` prop. You can find references on how to use these props at https://github.com/plotly/react-plotly.js/#basic-props.",
},
layout: {
description:
"Plotly's `layout` prop. You can find references on how to use these props at https://github.com/plotly/react-plotly.js/#basic-props.",
},
},
};
export default meta;
type Story = StoryObj<any>;
// More on writing stories with args: https://storybook.js.org/docs/react/writing-stories/args
export const Primary: Story = {
name: 'Line chart',
args: {
data: [
{
x: [1, 2, 3],
y: [2, 6, 3],
type: 'scatter',
mode: 'lines+markers',
marker: { color: 'red' },
},
],
layout: {
title: 'Chart built with Plotly',
xaxis: {
title: 'x Axis',
},
yaxis: {
title: 'y Axis',
},
},
},
};

View File

@ -0,0 +1,102 @@
import type { Meta, StoryObj } from '@storybook/react';
import {
PlotlyBarChart,
PlotlyBarChartProps,
} from '../src/components/PlotlyBarChart';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Charts/PlotlyBarChart',
component: PlotlyBarChart,
tags: ['autodocs'],
argTypes: {
data: {
description:
'Data to be displayed. \n\n \
Must be an object with one of the following properties: `url`, `values` or `csv` \n\n \
`url`: URL pointing to a CSV file. \n\n \
`values`: array of objects (check out [this example](/?path=/story/components-plotlybarchart--from-data-points)) \n\n \
`csv`: string with valid CSV (check out [this example](/?path=/story/components-plotlybarchart--from-inline-csv)) \n\n \
',
},
bytes: {
// TODO: likely this should be an extra option on the data parameter,
// specific to URLs
description:
"How many bytes to read from the url so that the entire file doesn's have to be fetched.",
},
parsingConfig: {
description:
'If using URL or CSV, this parsing config will be used to parse the data. Check https://www.papaparse.com/ for more info.',
},
title: {
description: 'Title to display on the chart.',
},
// TODO: commented out because this doesn't work
// lineLabel: {
// description:
// 'Label to display on the line, Optional, will use yAxis if not provided',
// },
xAxis: {
description:
'Name of the column header or object property that represents the X-axis on the data.',
},
yAxis: {
description:
'Name of the column header or object property that represents the Y-axis on the data.',
},
uniqueId: {
description: 'Provide a unique ID to help with cache revalidation of the fetched data.'
}
},
};
export default meta;
type Story = StoryObj<PlotlyBarChartProps>;
export const FromDataPoints: Story = {
name: 'Bar chart from array of data points',
args: {
data: {
values: [
{ year: '1850', temperature: -0.41765878 },
{ year: '1851', temperature: -0.2333498 },
{ year: '1852', temperature: -0.22939907 },
{ year: '1853', temperature: -0.27035445 },
{ year: '1854', temperature: -0.29163003 },
],
},
xAxis: 'year',
yAxis: 'temperature',
},
};
export const FromURL: Story = {
name: 'Bar chart from URL',
args: {
title: 'Apple Stock Prices',
data: {
url: 'https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv',
},
xAxis: 'Date',
yAxis: 'AAPL.Open',
},
};
export const FromInlineCSV: Story = {
name: 'Bar chart from inline CSV',
args: {
title: 'Apple Stock Prices',
data: {
csv: `Date,AAPL.Open,AAPL.High,AAPL.Low,AAPL.Close,AAPL.Volume,AAPL.Adjusted,dn,mavg,up,direction
2015-02-17,127.489998,128.880005,126.919998,127.830002,63152400,122.905254,106.7410523,117.9276669,129.1142814,Increasing
2015-02-18,127.629997,128.779999,127.449997,128.720001,44891700,123.760965,107.842423,118.9403335,130.0382439,Increasing
2015-02-19,128.479996,129.029999,128.330002,128.449997,37362400,123.501363,108.8942449,119.8891668,130.8840887,Decreasing
2015-02-20,128.619995,129.5,128.050003,129.5,48948400,124.510914,109.7854494,120.7635001,131.7415509,Increasing`,
},
xAxis: 'Date',
yAxis: 'AAPL.Open',
},
};

View File

@ -0,0 +1,101 @@
import type { Meta, StoryObj } from '@storybook/react';
import {
PlotlyLineChart,
PlotlyLineChartProps,
} from '../src/components/PlotlyLineChart';
const meta: Meta = {
title: 'Components/Charts/PlotlyLineChart',
component: PlotlyLineChart,
tags: ['autodocs'],
argTypes: {
data: {
description:
'Data to be displayed. \n\n \
Must be an object with one of the following properties: `url`, `values` or `csv` \n\n \
`url`: URL pointing to a CSV file. \n\n \
`values`: array of objects. \n\n \
`csv`: string with valid CSV. \n\n \
',
},
bytes: {
// TODO: likely this should be an extra option on the data parameter,
// specific to URLs
description:
"How many bytes to read from the url so that the entire file doesn's have to be fetched.",
},
parsingConfig: {
description:
'If using URL or CSV, this parsing config will be used to parse the data. Check https://www.papaparse.com/ for more info',
},
title: {
description: 'Title to display on the chart.',
},
lineLabel: {
description:
'Label to display on the line; falls back to the `yAxis` name if not provided.',
},
xAxis: {
description:
'Name of the column header or object property that represents the X-axis on the data.',
},
yAxis: {
description:
'Name of the column header or object property that represents the Y-axis on the data.',
},
uniqueId: {
description:
'Provide a unique ID to help with cache revalidation of the fetched data.',
},
},
};
export default meta;
type Story = StoryObj<PlotlyLineChartProps>;
export const FromDataPoints: Story = {
name: 'Line chart from array of data points',
args: {
data: {
values: [
{ year: '1850', temperature: -0.41765878 },
{ year: '1851', temperature: -0.2333498 },
{ year: '1852', temperature: -0.22939907 },
{ year: '1853', temperature: -0.27035445 },
{ year: '1854', temperature: -0.29163003 },
],
},
xAxis: 'year',
yAxis: 'temperature',
},
};
export const FromURL: Story = {
name: 'Line chart from URL',
args: {
title: 'Oil Price x Year',
data: {
url: 'https://raw.githubusercontent.com/datasets/oil-prices/main/data/wti-year.csv',
},
xAxis: 'Date',
yAxis: 'Price',
},
};
export const FromInlineCSV: Story = {
name: 'Line chart from inline CSV',
args: {
title: 'Apple Stock Prices',
data: {
csv: `Date,AAPL.Open,AAPL.High,AAPL.Low,AAPL.Close,AAPL.Volume,AAPL.Adjusted,dn,mavg,up,direction
2015-02-17,127.489998,128.880005,126.919998,127.830002,63152400,122.905254,106.7410523,117.9276669,129.1142814,Increasing
2015-02-18,127.629997,128.779999,127.449997,128.720001,44891700,123.760965,107.842423,118.9403335,130.0382439,Increasing
2015-02-19,128.479996,129.029999,128.330002,128.449997,37362400,123.501363,108.8942449,119.8891668,130.8840887,Decreasing
2015-02-20,128.619995,129.5,128.050003,129.5,48948400,124.510914,109.7854494,120.7635001,131.7415509,Increasing`,
},
xAxis: 'Date',
yAxis: 'AAPL.Open',
},
};
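
As a sketch (editorial annotation, not part of this diff), the `lineLabel` prop described above, which is not exercised by the stories, could be used as follows; the label text is an assumption:

export const WithLineLabel: Story = {
  name: 'Line chart with a custom line label',
  args: {
    title: 'Oil Price x Year',
    data: {
      url: 'https://raw.githubusercontent.com/datasets/oil-prices/main/data/wti-year.csv',
    },
    xAxis: 'Date',
    yAxis: 'Price',
    lineLabel: 'WTI price (USD)', // assumed label; falls back to yAxis if omitted
  },
};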

View File

@ -1,10 +1,13 @@
// NOTE: this component was renamed with .bkp so that it's hidden
// from the Storybook app
import type { Meta, StoryObj } from '@storybook/react';
import { Table, TableProps } from '../src/components/Table';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Table',
title: 'Components/Tabular/Table',
component: Table,
tags: ['autodocs'],
argTypes: {

View File

@ -4,9 +4,19 @@ import { Vega } from '../src/components/Vega';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/Vega',
title: 'Components/Charts/Vega',
component: Vega,
tags: ['autodocs'],
argTypes: {
data: {
description:
"Vega's `data` prop. You can find references on how to use this prop at https://vega.github.io/vega/docs/data/",
},
spec: {
description:
"Vega's `spec` prop. You can find references on how to use this prop at https://vega.github.io/vega/docs/specification/",
},
},
};
export default meta;
@ -15,7 +25,7 @@ type Story = StoryObj<any>;
// More on writing stories with args: https://storybook.js.org/docs/react/writing-stories/args
export const Primary: Story = {
name: 'Chart built with Vega',
name: 'Bar chart',
args: {
data: {
table: [

View File

@ -4,7 +4,7 @@ import { VegaLite } from '../src/components/VegaLite';
// More on how to set up stories at: https://storybook.js.org/docs/react/writing-stories/introduction
const meta: Meta = {
title: 'Components/VegaLite',
title: 'Components/Charts/VegaLite',
component: VegaLite,
tags: ['autodocs'],
argTypes: {
@ -25,7 +25,7 @@ type Story = StoryObj<any>;
// More on writing stories with args: https://storybook.js.org/docs/react/writing-stories/args
export const Primary: Story = {
name: 'Chart built with Vega Lite',
name: 'Bar chart',
args: {
data: {
table: [

View File

@ -53,7 +53,7 @@ export const Nav: React.FC<Props> = ({
<nav className="flex justify-between">
{/* Mobile navigation */}
<div className="mr-2 sm:mr-4 flex lg:hidden">
<NavMobile links={links}>{children}</NavMobile>
<NavMobile {...{title, links, social, search, defaultTheme, themeToggleIcon}}>{children}</NavMobile>
</div>
{/* Non-mobile navigation */}
<div className="flex flex-none items-center">

View File

@ -4,20 +4,16 @@ import { useRouter } from "next/router.js";
import { useEffect, useState } from "react";
import { SearchContext, SearchField } from "../Search";
import { MenuIcon, CloseIcon } from "../Icons";
import { NavLink, SearchProviderConfig } from "../types";
import type { NavConfig, ThemeConfig } from "./Nav";
interface Props extends React.PropsWithChildren {
author?: string;
links?: Array<NavLink>;
search?: SearchProviderConfig;
}
interface Props extends NavConfig, ThemeConfig, React.PropsWithChildren {}
// TODO why mobile navigation only accepts author and regular nav accepts different things like title, logo, version
// TODO: Search doesn't appear
export const NavMobile: React.FC<Props> = ({
children,
title,
links,
search,
author,
}) => {
const router = useRouter();
const [isOpen, setIsOpen] = useState(false);
@ -77,8 +73,8 @@ export const NavMobile: React.FC<Props> = ({
legacyBehavior
>
{/* <Logomark className="h-9 w-9" /> */}
<div className="font-extrabold text-primary dark:text-primary-dark text-2xl ml-6">
{author}
<div className="font-extrabold text-primary dark:text-primary-dark text-lg ml-6">
{title}
</div>
</Link>
</div>
@ -106,9 +102,7 @@ export const NavMobile: React.FC<Props> = ({
))}
</ul>
)}
{/* <div className="pt-6 border border-t-2">
{children}
</div> */}
<div className="pt-6">{children}</div>
</Dialog.Panel>
</Dialog>
</>

View File

@ -46,8 +46,8 @@ export const SiteToc: React.FC<Props> = ({ currentPath, nav }) => {
return (
<nav data-testid="lhs-sidebar" className="flex flex-col space-y-3 text-sm">
{sortNavGroupChildren(nav).map((n) => (
<NavComponent item={n} isActive={false} />
{sortNavGroupChildren(nav).map((n, index) => (
<NavComponent key={index} item={n} isActive={false} />
))}
</nav>
);
@ -96,8 +96,8 @@ const NavComponent: React.FC<{
leaveTo="transform scale-95 opacity-0"
>
<Disclosure.Panel className="flex flex-col space-y-3 pl-5 mt-3">
{sortNavGroupChildren(item.children).map((subItem) => (
<NavComponent item={subItem} isActive={false} />
{sortNavGroupChildren(item.children).map((subItem, index) => (
<NavComponent key={index} item={subItem} isActive={false} />
))}
</Disclosure.Panel>
</Transition>

View File

@ -1,5 +1,11 @@
# @portaljs/remark-wiki-link
## 1.2.0
### Minor Changes
- [#1084](https://github.com/datopian/datahub/pull/1084) [`57952e08`](https://github.com/datopian/datahub/commit/57952e0817770138881e7492dc9f43e9910b56a8) Thanks [@mohamedsalem401](https://github.com/mohamedsalem401)! - Add image resize feature
## 1.1.2
### Patch Changes

View File

@ -1,6 +1,6 @@
{
"name": "@portaljs/remark-wiki-link",
"version": "1.1.2",
"version": "1.2.0",
"description": "Parse and render wiki-style links in markdown especially Obsidian style links.",
"repository": {
"type": "git",

View File

@ -1,23 +1,23 @@
import { isSupportedFileFormat } from "./isSupportedFileFormat";
import { isSupportedFileFormat } from './isSupportedFileFormat';
const defaultWikiLinkResolver = (target: string) => {
// for [[#heading]] links
if (!target) {
return [];
}
let permalink = target.replace(/\/index$/, "");
let permalink = target.replace(/\/index$/, '');
// TODO what to do with [[index]] link?
if (permalink.length === 0) {
permalink = "/";
permalink = '/';
}
return [permalink];
};
export interface FromMarkdownOptions {
pathFormat?:
| "raw" // default; use for regular relative or absolute paths
| "obsidian-absolute" // use for Obsidian-style absolute paths (with no leading slash)
| "obsidian-short"; // use for Obsidian-style shortened paths (shortest path possible)
| 'raw' // default; use for regular relative or absolute paths
| 'obsidian-absolute' // use for Obsidian-style absolute paths (with no leading slash)
| 'obsidian-short'; // use for Obsidian-style shortened paths (shortest path possible)
permalinks?: string[]; // list of permalinks to match possible permalinks of a wiki link against
wikiLinkResolver?: (name: string) => string[]; // function to resolve wiki links to an array of possible permalinks
newClassName?: string; // class name to add to links that don't have a matching permalink
@ -25,14 +25,23 @@ export interface FromMarkdownOptions {
hrefTemplate?: (permalink: string) => string; // function to generate the href attribute of a link
}
export function getImageSize(size: string) {
// eslint-disable-next-line prefer-const
let [width, height] = size.split('x');
if (!height) height = width;
return { width, height };
}
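// Illustrative examples (editorial annotation, not part of this diff):
//   getImageSize('200x100') -> { width: '200', height: '100' }
//   getImageSize('200')     -> { width: '200', height: '200' }  (height defaults to width)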
// mdast-util-from-markdown extension
// https://github.com/syntax-tree/mdast-util-from-markdown#extension
function fromMarkdown(opts: FromMarkdownOptions = {}) {
const pathFormat = opts.pathFormat || "raw";
const pathFormat = opts.pathFormat || 'raw';
const permalinks = opts.permalinks || [];
const wikiLinkResolver = opts.wikiLinkResolver || defaultWikiLinkResolver;
const newClassName = opts.newClassName || "new";
const wikiLinkClassName = opts.wikiLinkClassName || "internal";
const newClassName = opts.newClassName || 'new';
const wikiLinkClassName = opts.wikiLinkClassName || 'internal';
const defaultHrefTemplate = (permalink: string) => permalink;
const hrefTemplate = opts.hrefTemplate || defaultHrefTemplate;
@ -44,9 +53,9 @@ function fromMarkdown(opts: FromMarkdownOptions = {}) {
function enterWikiLink(token) {
this.enter(
{
type: "wikiLink",
type: 'wikiLink',
data: {
isEmbed: token.isType === "embed",
isEmbed: token.isType === 'embed',
target: null, // the target of the link, e.g. "Foo Bar#Heading" in "[[Foo Bar#Heading]]"
alias: null, // the alias of the link, e.g. "Foo" in "[[Foo Bar|Foo]]"
permalink: null, // TODO shouldn't this be named just "link"?
@ -80,18 +89,18 @@ function fromMarkdown(opts: FromMarkdownOptions = {}) {
} = wikiLink;
// eslint-disable-next-line no-useless-escape
const wikiLinkWithHeadingPattern = /^(.*?)(#.*)?$/u;
const [, path, heading = ""] = target.match(wikiLinkWithHeadingPattern);
const [, path, heading = ''] = target.match(wikiLinkWithHeadingPattern);
const possibleWikiLinkPermalinks = wikiLinkResolver(path);
const matchingPermalink = permalinks.find((e) => {
return possibleWikiLinkPermalinks.find((p) => {
if (pathFormat === "obsidian-short") {
if (pathFormat === 'obsidian-short') {
if (e === p || e.endsWith(p)) {
return true;
}
} else if (pathFormat === "obsidian-absolute") {
if (e === "/" + p) {
} else if (pathFormat === 'obsidian-absolute') {
if (e === '/' + p) {
return true;
}
} else {
@ -106,20 +115,19 @@ function fromMarkdown(opts: FromMarkdownOptions = {}) {
// TODO this is ugly
const link =
matchingPermalink ||
(pathFormat === "obsidian-absolute"
? "/" + possibleWikiLinkPermalinks[0]
(pathFormat === 'obsidian-absolute'
? '/' + possibleWikiLinkPermalinks[0]
: possibleWikiLinkPermalinks[0]) ||
"";
'';
wikiLink.data.exists = !!matchingPermalink;
wikiLink.data.permalink = link;
// remove leading # if the target is a heading on the same page
const displayName = alias || target.replace(/^#/, "");
const headingId = heading.replace(/\s+/g, "-").toLowerCase();
const displayName = alias || target.replace(/^#/, '');
const headingId = heading.replace(/\s+/g, '-').toLowerCase();
let classNames = wikiLinkClassName;
if (!matchingPermalink) {
classNames += " " + newClassName;
classNames += ' ' + newClassName;
}
if (isEmbed) {
@ -127,44 +135,55 @@ function fromMarkdown(opts: FromMarkdownOptions = {}) {
if (!isSupportedFormat) {
// Temporarily render note transclusion as a regular wiki link
if (!format) {
wikiLink.data.hName = "a";
wikiLink.data.hName = 'a';
wikiLink.data.hProperties = {
className: classNames + " " + "transclusion",
className: classNames + ' ' + 'transclusion',
href: hrefTemplate(link) + headingId,
};
wikiLink.data.hChildren = [{ type: "text", value: displayName }];
wikiLink.data.hChildren = [{ type: 'text', value: displayName }];
} else {
wikiLink.data.hName = "p";
wikiLink.data.hName = 'p';
wikiLink.data.hChildren = [
{
type: "text",
type: 'text',
value: `![[${target}]]`,
},
];
}
} else if (format === "pdf") {
wikiLink.data.hName = "iframe";
} else if (format === 'pdf') {
wikiLink.data.hName = 'iframe';
wikiLink.data.hProperties = {
className: classNames,
width: "100%",
width: '100%',
src: `${hrefTemplate(link)}#toolbar=0`,
};
} else {
wikiLink.data.hName = "img";
wikiLink.data.hProperties = {
className: classNames,
src: hrefTemplate(link),
alt: displayName,
};
const hasDimensions = alias && /^\d+(x\d+)?$/.test(alias);
// Take the target as alt text except if alt name was provided [[target|alt text]]
const altText = hasDimensions || !alias ? target : alias;
wikiLink.data.hName = 'img';
wikiLink.data.hProperties = {
className: classNames,
src: hrefTemplate(link),
alt: altText
};
if (hasDimensions) {
const { width, height } = getImageSize(alias as string);
Object.assign(wikiLink.data.hProperties, {
width,
height,
});
}
}
} else {
wikiLink.data.hName = "a";
wikiLink.data.hName = 'a';
wikiLink.data.hProperties = {
className: classNames,
href: hrefTemplate(link) + headingId,
};
wikiLink.data.hChildren = [{ type: "text", value: displayName }];
wikiLink.data.hChildren = [{ type: 'text', value: displayName }];
}
}

View File

@ -1,23 +1,24 @@
import { isSupportedFileFormat } from "./isSupportedFileFormat";
import { getImageSize } from './fromMarkdown';
import { isSupportedFileFormat } from './isSupportedFileFormat';
const defaultWikiLinkResolver = (target: string) => {
// for [[#heading]] links
if (!target) {
return [];
}
let permalink = target.replace(/\/index$/, "");
let permalink = target.replace(/\/index$/, '');
// TODO what to do with [[index]] link?
if (permalink.length === 0) {
permalink = "/";
permalink = '/';
}
return [permalink];
};
export interface HtmlOptions {
pathFormat?:
| "raw" // default; use for regular relative or absolute paths
| "obsidian-absolute" // use for Obsidian-style absolute paths (with no leading slash)
| "obsidian-short"; // use for Obsidian-style shortened paths (shortest path possible)
| 'raw' // default; use for regular relative or absolute paths
| 'obsidian-absolute' // use for Obsidian-style absolute paths (with no leading slash)
| 'obsidian-short'; // use for Obsidian-style shortened paths (shortest path possible)
permalinks?: string[]; // list of permalinks to match possible permalinks of a wiki link against
wikiLinkResolver?: (name: string) => string[]; // function to resolve wiki links to an array of possible permalinks
newClassName?: string; // class name to add to links that don't have a matching permalink
@ -28,11 +29,11 @@ export interface HtmlOptions {
// Micromark HtmlExtension
// https://github.com/micromark/micromark#htmlextension
function html(opts: HtmlOptions = {}) {
const pathFormat = opts.pathFormat || "raw";
const pathFormat = opts.pathFormat || 'raw';
const permalinks = opts.permalinks || [];
const wikiLinkResolver = opts.wikiLinkResolver || defaultWikiLinkResolver;
const newClassName = opts.newClassName || "new";
const wikiLinkClassName = opts.wikiLinkClassName || "internal";
const newClassName = opts.newClassName || 'new';
const wikiLinkClassName = opts.wikiLinkClassName || 'internal';
const defaultHrefTemplate = (permalink: string) => permalink;
const hrefTemplate = opts.hrefTemplate || defaultHrefTemplate;
@ -41,21 +42,21 @@ function html(opts: HtmlOptions = {}) {
}
function enterWikiLink() {
let stack = this.getData("wikiLinkStack");
if (!stack) this.setData("wikiLinkStack", (stack = []));
let stack = this.getData('wikiLinkStack');
if (!stack) this.setData('wikiLinkStack', (stack = []));
stack.push({});
}
function exitWikiLinkTarget(token) {
const target = this.sliceSerialize(token);
const current = top(this.getData("wikiLinkStack"));
const current = top(this.getData('wikiLinkStack'));
current.target = target;
}
function exitWikiLinkAlias(token) {
const alias = this.sliceSerialize(token);
const current = top(this.getData("wikiLinkStack"));
const current = top(this.getData('wikiLinkStack'));
current.alias = alias;
}
@ -111,7 +112,9 @@ function html(opts: HtmlOptions = {}) {
// Temporarily render note transclusion as a regular wiki link
if (!format) {
this.tag(
`<a href="${hrefTemplate(link + headingId)}" class="${classNames} transclusion">`
`<a href="${hrefTemplate(
link + headingId
)}" class="${classNames} transclusion">`
);
this.raw(displayName);
this.tag("</a>");
@ -125,11 +128,18 @@ function html(opts: HtmlOptions = {}) {
)}#toolbar=0" class="${classNames}" />`
);
} else {
this.tag(
`<img src="${hrefTemplate(
link
)}" alt="${displayName}" class="${classNames}" />`
);
const hasDimensions = alias && /^\d+(x\d+)?$/.test(alias);
// Take the target as alt text except if alt name was provided [[target|alt text]]
const altText = hasDimensions || !alias ? target : alias;
let imgAttributes = `src="${hrefTemplate(
link
)}" alt="${altText}" class="${classNames}"`;
if (hasDimensions) {
const { width, height } = getImageSize(alias as string);
imgAttributes += ` width="${width}" height="${height}"`;
}
this.tag(`<img ${imgAttributes} />`);
}
} else {
this.tag(

View File

@ -1,42 +1,203 @@
import { toMarkdown } from "mdast-util-wiki-link";
import { syntax, SyntaxOptions } from "./syntax";
import { fromMarkdown, FromMarkdownOptions } from "./fromMarkdown";
import { isSupportedFileFormat } from './isSupportedFileFormat';
let warningIssued = false;
const defaultWikiLinkResolver = (target: string) => {
// for [[#heading]] links
if (!target) {
return [];
}
let permalink = target.replace(/\/index$/, '');
// TODO what to do with [[index]] link?
if (permalink.length === 0) {
permalink = '/';
}
return [permalink];
};
type RemarkWikiLinkOptions = FromMarkdownOptions & SyntaxOptions;
export interface FromMarkdownOptions {
pathFormat?:
| 'raw' // default; use for regular relative or absolute paths
| 'obsidian-absolute' // use for Obsidian-style absolute paths (with no leading slash)
| 'obsidian-short'; // use for Obsidian-style shortened paths (shortest path possible)
permalinks?: string[]; // list of permalinks to match possible permalinks of a wiki link against
wikiLinkResolver?: (name: string) => string[]; // function to resolve wiki links to an array of possible permalinks
newClassName?: string; // class name to add to links that don't have a matching permalink
wikiLinkClassName?: string; // class name to add to all wiki links
hrefTemplate?: (permalink: string) => string; // function to generate the href attribute of a link
}
function remarkWikiLink(opts: RemarkWikiLinkOptions = {}) {
const data = this.data(); // this is a reference to the processor
export function getImageSize(size: string) {
// eslint-disable-next-line prefer-const
let [width, height] = size.split('x');
function add(field, value) {
if (data[field]) data[field].push(value);
else data[field] = [value];
if (!height) height = width;
return { width, height };
}
// mdast-util-from-markdown extension
// https://github.com/syntax-tree/mdast-util-from-markdown#extension
function fromMarkdown(opts: FromMarkdownOptions = {}) {
const pathFormat = opts.pathFormat || 'raw';
const permalinks = opts.permalinks || [];
const wikiLinkResolver = opts.wikiLinkResolver || defaultWikiLinkResolver;
const newClassName = opts.newClassName || 'new';
const wikiLinkClassName = opts.wikiLinkClassName || 'internal';
const defaultHrefTemplate = (permalink: string) => permalink;
const hrefTemplate = opts.hrefTemplate || defaultHrefTemplate;
function top(stack) {
return stack[stack.length - 1];
}
if (
!warningIssued &&
((this.Parser &&
this.Parser.prototype &&
this.Parser.prototype.blockTokenizers) ||
(this.Compiler &&
this.Compiler.prototype &&
this.Compiler.prototype.visitors))
) {
warningIssued = true;
console.warn(
"[remark-wiki-link] Warning: please upgrade to remark 13 to use this plugin"
function enterWikiLink(token) {
this.enter(
{
type: 'wikiLink',
data: {
isEmbed: token.isType === 'embed',
target: null, // the target of the link, e.g. "Foo Bar#Heading" in "[[Foo Bar#Heading]]"
alias: null, // the alias of the link, e.g. "Foo" in "[[Foo Bar|Foo]]"
permalink: null, // TODO shouldn't this be named just "link"?
exists: null, // TODO is this even needed here?
// fields for mdast-util-to-hast (used e.g. by remark-rehype)
hName: null,
hProperties: null,
hChildren: null,
},
},
token
);
}
// add extensions to packages used by remark-parse
// micromark extensions
add("micromarkExtensions", syntax(opts));
// mdast-util-from-markdown extensions
add("fromMarkdownExtensions", fromMarkdown(opts));
// mdast-util-to-markdown extensions
add("toMarkdownExtensions", toMarkdown(opts));
function exitWikiLinkTarget(token) {
const target = this.sliceSerialize(token);
const current = top(this.stack);
current.data.target = target;
}
function exitWikiLinkAlias(token) {
const alias = this.sliceSerialize(token);
const current = top(this.stack);
current.data.alias = alias;
}
function exitWikiLink(token) {
const wikiLink = top(this.stack);
const {
data: { isEmbed, target, alias },
} = wikiLink;
this.exit(token);
// eslint-disable-next-line no-useless-escape
const wikiLinkWithHeadingPattern = /^(.*?)(#.*)?$/u;
const [, path, heading = ''] = target.match(wikiLinkWithHeadingPattern);
const possibleWikiLinkPermalinks = wikiLinkResolver(path);
const matchingPermalink = permalinks.find((e) => {
return possibleWikiLinkPermalinks.find((p) => {
if (pathFormat === 'obsidian-short') {
if (e === p || e.endsWith(p)) {
return true;
}
} else if (pathFormat === 'obsidian-absolute') {
if (e === '/' + p) {
return true;
}
} else {
if (e === p) {
return true;
}
}
return false;
});
});
// TODO this is ugly
const link =
matchingPermalink ||
(pathFormat === 'obsidian-absolute'
? '/' + possibleWikiLinkPermalinks[0]
: possibleWikiLinkPermalinks[0]) ||
'';
wikiLink.data.exists = !!matchingPermalink;
wikiLink.data.permalink = link;
// remove leading # if the target is a heading on the same page
const displayName = alias || target.replace(/^#/, '');
const headingId = heading.replace(/\s+/g, '-').toLowerCase();
let classNames = wikiLinkClassName;
if (!matchingPermalink) {
classNames += ' ' + newClassName;
}
if (isEmbed) {
const [isSupportedFormat, format] = isSupportedFileFormat(target);
if (!isSupportedFormat) {
// Temporarily render note transclusion as a regular wiki link
if (!format) {
wikiLink.data.hName = 'a';
wikiLink.data.hProperties = {
className: classNames + ' ' + 'transclusion',
href: hrefTemplate(link) + headingId,
};
wikiLink.data.hChildren = [{ type: 'text', value: displayName }];
} else {
wikiLink.data.hName = 'p';
wikiLink.data.hChildren = [
{
type: 'text',
value: `![[${target}]]`,
},
];
}
} else if (format === 'pdf') {
wikiLink.data.hName = 'iframe';
wikiLink.data.hProperties = {
className: classNames,
width: '100%',
src: `${hrefTemplate(link)}#toolbar=0`,
};
} else {
const hasDimensions = alias && /^\d+(x\d+)?$/.test(alias);
// Take the target as alt text except if alt name was provided [[target|alt text]]
const altText = hasDimensions || !alias ? target : alias;
wikiLink.data.hName = 'img';
wikiLink.data.hProperties = {
className: classNames,
src: hrefTemplate(link),
alt: altText
};
if (hasDimensions) {
const { width, height } = getImageSize(alias as string);
Object.assign(wikiLink.data.hProperties, {
width,
height,
});
}
}
} else {
wikiLink.data.hName = 'a';
wikiLink.data.hProperties = {
className: classNames,
href: hrefTemplate(link) + headingId,
};
wikiLink.data.hChildren = [{ type: 'text', value: displayName }];
}
}
return {
enter: {
wikiLink: enterWikiLink,
},
exit: {
wikiLinkTarget: exitWikiLinkTarget,
wikiLinkAlias: exitWikiLinkAlias,
wikiLink: exitWikiLink,
},
};
}
export default remarkWikiLink;
export { remarkWikiLink };
export { fromMarkdown };
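
For reference, a minimal usage sketch of the plugin as exercised by the tests further down; the imports and option values are assumptions:

import { unified } from 'unified';
import markdown from 'remark-parse';
import wikiLinkPlugin from '@portaljs/remark-wiki-link';

const processor = unified()
  .use(markdown)
  .use(wikiLinkPlugin, {
    permalinks: ['/some/folder'],           // permalinks to match wiki links against
    pathFormat: 'obsidian-short',           // or 'raw' / 'obsidian-absolute'
    hrefTemplate: (permalink) => permalink, // customize generated hrefs
  });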

View File

@ -38,6 +38,5 @@ const defaultPathToPermalinkFunc = (
.replace(markdownFolder, "") // make the permalink relative to the markdown folder
.replace(/\.(mdx|md)/, "")
.replace(/\\/g, "/") // replace windows backslash with forward slash
.replace(/\/index$/, ""); // remove index from the end of the permalink
return permalink.length > 0 ? permalink : "/"; // for home page
};

View File

@ -1,9 +1,6 @@
import * as path from "path";
// import * as url from "url";
import { getPermalinks } from "../src/utils";
// const __dirname = url.fileURLToPath(new URL(".", import.meta.url));
// const markdownFolder = path.join(__dirname, "/fixtures/content");
const markdownFolder = path.join(
".",
"test/fixtures/content"
@ -12,12 +9,12 @@ const markdownFolder = path.join(
describe("getPermalinks", () => {
test("should return an array of permalinks", () => {
const expectedPermalinks = [
"/", // /index.md
"/README",
"/abc",
"/blog/first-post",
"/blog/Second Post",
"/blog/third-post",
"/blog", // /blog/index.md
"/blog/README",
"/blog/tutorials/first-tutorial",
"/assets/Pasted Image 123.png",
];
@ -28,35 +25,4 @@ describe("getPermalinks", () => {
expect(expectedPermalinks).toContain(permalink);
});
});
test("should return an array of permalinks with custom path -> permalink converter function", () => {
const expectedPermalinks = [
"/", // /index.md
"/abc",
"/blog/first-post",
"/blog/second-post",
"/blog/third-post",
"/blog", // /blog/index.md
"/blog/tutorials/first-tutorial",
"/assets/pasted-image-123.png",
];
const func = (filePath: string, markdownFolder: string) => {
const permalink = filePath
.replace(markdownFolder, "") // make the permalink relative to the markdown folder
.replace(/\.(mdx|md)/, "")
.replace(/\\/g, "/") // replace windows backslash with forward slash
.replace(/\/index$/, "") // remove index from the end of the permalink
.replace(/ /g, "-") // replace spaces with hyphens
.toLowerCase(); // convert to lowercase
return permalink.length > 0 ? permalink : "/"; // for home page
};
const permalinks = getPermalinks(markdownFolder, [/\.DS_Store/], func);
expect(permalinks).toHaveLength(expectedPermalinks.length);
permalinks.forEach((permalink) => {
expect(expectedPermalinks).toContain(permalink);
});
});
});

View File

@ -48,7 +48,7 @@ describe("micromark-extension-wiki-link", () => {
html({
permalinks: ["/some/folder/Wiki Link"],
pathFormat: "obsidian-short",
}) as any // TODO type fix
}) as any, // TODO type fix
],
});
expect(serialized).toBe(
@ -75,7 +75,7 @@ describe("micromark-extension-wiki-link", () => {
html({
permalinks: ["/some/folder/Wiki Link"],
pathFormat: "obsidian-absolute",
}) as any // TODO type fix
}) as any, // TODO type fix
],
});
expect(serialized).toBe(
@ -97,10 +97,14 @@ describe("micromark-extension-wiki-link", () => {
});
test("parses a wiki link with heading and alias", () => {
const serialized = micromark("[[Wiki Link#Some Heading|Alias]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
const serialized = micromark(
"[[Wiki Link#Some Heading|Alias]]",
"ascii",
{
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
}
);
// note: lowercased and hyphenated heading
expect(serialized).toBe(
'<p><a href="Wiki Link#some-heading" class="internal new">Alias</a></p>'
@ -134,7 +138,7 @@ describe("micromark-extension-wiki-link", () => {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe("<p>![[My Image.xyz]]</p>");
expect(serialized).toBe('<p>![[My Image.xyz]]</p>');
});
test("parses and image ambed with a matching permalink", () => {
@ -147,6 +151,28 @@ describe("micromark-extension-wiki-link", () => {
);
});
// TODO: Fix alt attribute
test("Can identify the dimensions of the image if exists", () => {
const serialized = micromark("![[My Image.jpg|200]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html({ permalinks: ["My Image.jpg"] }) as any], // TODO type fix
});
expect(serialized).toBe(
'<p><img src="My Image.jpg" alt="My Image.jpg" class="internal" width="200" height="200" /></p>'
);
});
// TODO: Fix alt attribute
test("Can identify the dimensions of the image if exists", () => {
const serialized = micromark("![[My Image.jpg|200x200]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html({ permalinks: ["My Image.jpg"] }) as any], // TODO type fix
});
expect(serialized).toBe(
'<p><img src="My Image.jpg" alt="My Image.jpg" class="internal" width="200" height="200" /></p>'
);
});
test("parses an image embed with a matching permalink and Obsidian-style shortedned path", () => {
const serialized = micromark("![[My Image.jpg]]", {
extensions: [syntax()],
@ -154,7 +180,7 @@ describe("micromark-extension-wiki-link", () => {
html({
permalinks: ["/assets/My Image.jpg"],
pathFormat: "obsidian-short",
}) as any // TODO type fix
}) as any, // TODO type fix
],
});
expect(serialized).toBe(
@ -189,7 +215,7 @@ describe("micromark-extension-wiki-link", () => {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe("<p>[[Wiki Link</p>");
expect(serialized).toBe('<p>[[Wiki Link</p>');
});
test("doesn't parse a wiki link with one missing closing bracket", () => {
@ -197,7 +223,7 @@ describe("micromark-extension-wiki-link", () => {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe("<p>[[Wiki Link]</p>");
expect(serialized).toBe('<p>[[Wiki Link]</p>');
});
test("doesn't parse a wiki link with a missing opening bracket", () => {
@ -205,7 +231,7 @@ describe("micromark-extension-wiki-link", () => {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe("<p>[Wiki Link]]</p>");
expect(serialized).toBe('<p>[Wiki Link]]</p>');
});
test("doesn't parse a wiki link in single brackets", () => {
@ -213,7 +239,7 @@ describe("micromark-extension-wiki-link", () => {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe("<p>[Wiki Link]</p>");
expect(serialized).toBe('<p>[Wiki Link]</p>');
});
});
@ -225,7 +251,7 @@ describe("micromark-extension-wiki-link", () => {
html({
newClassName: "test-new",
wikiLinkClassName: "test-wiki-link",
}) as any // TODO type fix
}) as any, // TODO type fix
],
});
expect(serialized).toBe(
@ -251,7 +277,7 @@ describe("micromark-extension-wiki-link", () => {
wikiLinkResolver: (page) => [
page.replace(/\s+/, "-").toLowerCase(),
],
}) as any // TODO type fix
}) as any, // TODO type fix
],
});
expect(serialized).toBe(
@ -260,56 +286,6 @@ describe("micromark-extension-wiki-link", () => {
});
});
test("parses wiki links to index files", () => {
const serialized = micromark("[[/some/folder/index]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe(
'<p><a href="/some/folder" class="internal new">/some/folder/index</a></p>'
);
});
describe("other", () => {
test("parses a wiki link to some index page in a folder with no matching permalink", () => {
const serialized = micromark("[[/some/folder/index]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe(
'<p><a href="/some/folder" class="internal new">/some/folder/index</a></p>'
);
});
test("parses a wiki link to some index page in a folder with a matching permalink", () => {
const serialized = micromark("[[/some/folder/index]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html({ permalinks: ["/some/folder"] }) as any], // TODO type fix
});
expect(serialized).toBe(
'<p><a href="/some/folder" class="internal">/some/folder/index</a></p>'
);
});
test("parses a wiki link to home index page with no matching permalink", () => {
const serialized = micromark("[[/index]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html() as any], // TODO type fix
});
expect(serialized).toBe(
'<p><a href="/" class="internal new">/index</a></p>'
);
});
test("parses a wiki link to home index page with a matching permalink", () => {
const serialized = micromark("[[/index]]", "ascii", {
extensions: [syntax()],
htmlExtensions: [html({ permalinks: ["/"] }) as any], // TODO type fix
});
expect(serialized).toBe('<p><a href="/" class="internal">/index</a></p>');
});
});
describe("transclusions", () => {
test("parsers a transclusion as a regular wiki link", () => {
const serialized = micromark("![[Some Page]]", "ascii", {
@ -330,5 +306,5 @@ describe("micromark-extension-wiki-link", () => {
});
expect(serialized).toBe(`<p><a href="li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\#li-nk-w(i)th-àcèô-íã_a(n)d_underline!:ª%@'*º$-°~./\\" class="internal new">li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\#LI NK-W(i)th-àcèô íã_a(n)d_uNdErlinE!:ª%@'*º$ °~./\\</a></p>`);
});
})
});
});

View File

@ -246,6 +246,28 @@ describe("remark-wiki-link", () => {
expect(node.data?.hName).toEqual("img");
expect((node.data?.hProperties as any).src).toEqual("My Image.png");
expect((node.data?.hProperties as any).alt).toEqual("My Image.png");
expect((node.data?.hProperties as any).width).toBeUndefined();
expect((node.data?.hProperties as any).height).toBeUndefined();
});
});
test("Can identify the dimensions of the image if exists", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
let ast = processor.parse("![[My Image.png|132x612]]");
ast = processor.runSync(ast);
expect(select("wikiLink", ast)).not.toEqual(null);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.isEmbed).toEqual(true);
expect(node.data?.target).toEqual("My Image.png");
expect(node.data?.permalink).toEqual("My Image.png");
expect(node.data?.hName).toEqual("img");
expect((node.data?.hProperties as any).src).toEqual("My Image.png");
expect((node.data?.hProperties as any).alt).toEqual("My Image.png");
expect((node.data?.hProperties as any).width).toBe("132");
expect((node.data?.hProperties as any).height).toBe("612");
});
});
@ -365,13 +387,17 @@ describe("remark-wiki-link", () => {
test("parses a link with special characters and symbols", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
let ast = processor.parse("[[li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\#li-nk-w(i)th-àcèô íã_a(n)D_UNDERLINE!:ª%@'*º$ °~./\\]]");
let ast = processor.parse(
"[[li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\#li-nk-w(i)th-àcèô íã_a(n)D_UNDERLINE!:ª%@'*º$ °~./\\]]"
);
ast = processor.runSync(ast);
expect(select("wikiLink", ast)).not.toEqual(null);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(false);
expect(node.data?.permalink).toEqual("li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\");
expect(node.data?.permalink).toEqual(
"li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\"
);
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual(
@ -383,9 +409,9 @@ describe("remark-wiki-link", () => {
expect((node.data?.hChildren as any)[0].value).toEqual(
"li nk-w(i)th-àcèô íã_a(n)d_underline!:ª%@'*º$ °~./\\#li-nk-w(i)th-àcèô íã_a(n)D_UNDERLINE!:ª%@'*º$ °~./\\"
);
})
});
});
})
});
describe("invalid wiki links", () => {
test("doesn't parse a wiki link with two missing closing brackets", () => {
@ -459,109 +485,6 @@ describe("remark-wiki-link", () => {
});
});
test("parses wiki links to index files", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
let ast = processor.parse("[[/some/folder/index]]");
ast = processor.runSync(ast);
expect(select("wikiLink", ast)).not.toEqual(null);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(false);
expect(node.data?.permalink).toEqual("/some/folder");
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual("internal new");
expect((node.data?.hProperties as any).href).toEqual("/some/folder");
expect((node.data?.hChildren as any)[0].value).toEqual(
"/some/folder/index"
);
});
});
describe("other", () => {
test("parses a wiki link to some index page in a folder with no matching permalink", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
let ast = processor.parse("[[/some/folder/index]]");
ast = processor.runSync(ast);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(false);
expect(node.data?.permalink).toEqual("/some/folder");
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual(
"internal new"
);
expect((node.data?.hProperties as any).href).toEqual("/some/folder");
expect((node.data?.hChildren as any)[0].value).toEqual(
"/some/folder/index"
);
});
});
test("parses a wiki link to some index page in a folder with a matching permalink", () => {
const processor = unified()
.use(markdown)
.use(wikiLinkPlugin, { permalinks: ["/some/folder"] });
let ast = processor.parse("[[/some/folder/index]]");
ast = processor.runSync(ast);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(true);
expect(node.data?.permalink).toEqual("/some/folder");
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual("internal");
expect((node.data?.hProperties as any).href).toEqual("/some/folder");
expect((node.data?.hChildren as any)[0].value).toEqual(
"/some/folder/index"
);
});
});
test("parses a wiki link to home index page with no matching permalink", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
let ast = processor.parse("[[/index]]");
ast = processor.runSync(ast);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(false);
expect(node.data?.permalink).toEqual("/");
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual(
"internal new"
);
expect((node.data?.hProperties as any).href).toEqual("/");
expect((node.data?.hChildren as any)[0].value).toEqual("/index");
});
});
test("parses a wiki link to home index page with a matching permalink", () => {
const processor = unified()
.use(markdown)
.use(wikiLinkPlugin, { permalinks: ["/"] });
let ast = processor.parse("[[/index]]");
ast = processor.runSync(ast);
visit(ast, "wikiLink", (node: Node) => {
expect(node.data?.exists).toEqual(true);
expect(node.data?.permalink).toEqual("/");
expect(node.data?.alias).toEqual(null);
expect(node.data?.hName).toEqual("a");
expect((node.data?.hProperties as any).className).toEqual("internal");
expect((node.data?.hProperties as any).href).toEqual("/");
expect((node.data?.hChildren as any)[0].value).toEqual("/index");
});
});
});
describe("transclusions", () => {
test("replaces a transclusion with a regular wiki link", () => {
const processor = unified().use(markdown).use(wikiLinkPlugin);
@ -586,4 +509,3 @@ describe("remark-wiki-link", () => {
});
});
});

View File

@ -12,7 +12,7 @@ export default function JSONLD({
return <></>;
}
const baseUrl = process.env.NEXT_PUBLIC_SITE_URL || 'https://portaljs.org';
const baseUrl = process.env.NEXT_PUBLIC_SITE_URL || 'https://portaljs.com';
const pageUrl = `${baseUrl}/${meta.urlPath}`;
const imageMatches = source.match(

View File

@ -81,7 +81,6 @@ export default function Layout({
}
return section.children.findIndex(isActive) > -1;
}
return (
<>
{title && <NextSeo title={title} description={description} />}

View File

@ -22,11 +22,41 @@ const items = [
sourceUrl: 'https://github.com/FCSCOpendata/frontend',
},
{
title: 'Datahub Open Data',
href: 'https://opendata.datahub.io/',
image: '/images/showcases/datahub.webp',
description: 'Demo Data Portal by DataHub',
title: 'Frictionless Data',
href: 'https://datahub.io/core/co2-ppm',
repository: 'https://github.com/datopian/datahub/tree/main/examples/dataset-frictionless',
image: '/images/showcases/frictionless-capture.png',
description: 'Progressive open-source framework for building data infrastructure - data management, data integration, data flows, etc. It includes various data standards and provides software to work with data.',
},
{
title: "OpenSpending",
image: "/images/showcases/openspending.png",
href: "https://www.openspending.org",
repository: 'https://github.com/datopian/datahub/tree/main/examples/openspending',
description: "OpenSpending is a free, open and global platform to search, visualise and analyse fiscal data in the public sphere."
},
{
title: "FiveThirtyEight",
image: "/images/showcases/fivethirtyeight.png",
href: "https://fivethirtyeight.portaljs.org/",
repository: 'https://github.com/datopian/datahub/tree/main/examples/fivethirtyeight',
description: "This is a replica of data.fivethirtyeight.com using PortalJS."
},
{
title: "Github Datasets",
image: "/images/showcases/github-datasets.png",
href: "https://example.portaljs.org/",
repository: 'https://github.com/datopian/datahub/tree/main/examples/github-backed-catalog',
description: "A simple data catalog that get its data from a list of GitHub repos that serve as datasets."
},
{
title: "Hatespeech Data",
image: "/images/showcases/turing.png",
href: "https://hatespeechdata.com/",
repository: 'https://github.com/datopian/datahub/tree/main/examples/turing',
description: "Datasets annotated for hate speech, online abuse, and offensive language which are useful for training a natural language processing system to detect this online abuse."
},
];
export default function Showcases() {

View File

@ -1,10 +1,6 @@
export default function ShowcasesItem({ item }) {
return (
<a
className="rounded overflow-hidden group relative border-1 shadow-lg"
target="_blank"
href={item.href}
>
<div className="rounded overflow-hidden group relative border-1 shadow-lg">
<div
className="bg-cover bg-no-repeat bg-top aspect-video w-full group-hover:blur-sm group-hover:scale-105 transition-all duration-200"
style={{ backgroundImage: `url(${item.image})` }}
@ -16,9 +12,48 @@ export default function ShowcasesItem({ item }) {
<div className="text-center text-primary-dark">
<span className="text-xl font-semibold">{item.title}</span>
<p className="text-base font-medium">{item.description}</p>
<div className="flex justify-center mt-2 gap-2 ">
{item.href && (
<a
target="_blank"
className=" text-white w-8 h-8 p-1 bg-primary rounded-full hover:scale-110 transition cursor-pointer z-50"
rel="noreferrer"
href={item.href}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 420 420"
stroke="white"
fill="none"
>
<path stroke-width="26" d="M209,15a195,195 0 1,0 2,0z" />
<path
stroke-width="18"
d="m210,15v390m195-195H15M59,90a260,260 0 0,0 302,0 m0,240 a260,260 0 0,0-302,0M195,20a250,250 0 0,0 0,382 m30,0 a250,250 0 0,0 0-382"
/>
</svg>
</a>
)}
{item.repository && (
<a
target="_blank"
rel="noreferrer"
className="w-8 h-8 bg-black rounded-full p-1 hover:scale-110 transition cursor-pointer z-50"
href={item.repository}
>
<svg
aria-hidden="true"
viewBox="0 0 16 16"
fill="currentColor"
>
<path d="M8 0C3.58 0 0 3.58 0 8C0 11.54 2.29 14.53 5.47 15.59C5.87 15.66 6.02 15.42 6.02 15.21C6.02 15.02 6.01 14.39 6.01 13.72C4 14.09 3.48 13.23 3.32 12.78C3.23 12.55 2.84 11.84 2.5 11.65C2.22 11.5 1.82 11.13 2.49 11.12C3.12 11.11 3.57 11.7 3.72 11.94C4.44 13.15 5.59 12.81 6.05 12.6C6.12 12.08 6.33 11.73 6.56 11.53C4.78 11.33 2.92 10.64 2.92 7.58C2.92 6.71 3.23 5.99 3.74 5.43C3.66 5.23 3.38 4.41 3.82 3.31C3.82 3.31 4.49 3.1 6.02 4.13C6.66 3.95 7.34 3.86 8.02 3.86C8.7 3.86 9.38 3.95 10.02 4.13C11.55 3.09 12.22 3.31 12.22 3.31C12.66 4.41 12.38 5.23 12.3 5.43C12.81 5.99 13.12 6.7 13.12 7.58C13.12 10.65 11.25 11.33 9.47 11.53C9.76 11.78 10.01 12.26 10.01 13.01C10.01 14.08 10 14.94 10 15.21C10 15.42 10.15 15.67 10.55 15.59C13.71 14.53 16 11.53 16 8C16 3.58 12.42 0 8 0Z" />
</svg>
</a>
)}
</div>
</div>
</div>
</div>
</a>
</div>
);
}

View File

@ -7,17 +7,17 @@ filetype: 'blog'
This post walks you through adding maps and geospatial visualizations to PortalJS.
Are you interested in building rich and interactive data portals? Do you find value in the power and flexibility of JavaScript, Nextjs, and React? If so, [PortalJS](https://portaljs.org/) is for you. It's a state-of-the-art framework leveraging these technologies to help you build rich data portals.
Are you interested in building rich and interactive data portals? Do you find value in the power and flexibility of JavaScript, Nextjs, and React? If so, [PortalJS](https://portaljs.com/) is for you. It's a state-of-the-art framework leveraging these technologies to help you build rich data portals.
Effective data visualization lies in the use of various data components. Within [PortalJS](https://portaljs.org/), we take data visualization a step further. It's not just about displaying data - it's about telling a story through combining a variety of data components.
Effective data visualization lies in the use of various data components. Within [PortalJS](https://portaljs.com/), we take data visualization a step further. It's not just about displaying data - it's about telling a story through combining a variety of data components.
In this post we will share our latest enhancement to PortalJS: maps, a powerful tool for visualizing geospatial data. We will take you on a tour of our experiments and progress in enhancing map functionalities on PortalJS. The journey is still in its early stages, with new facets being unveiled and refined as we perfect our API.
## Exploring Map Formats
Maps play a crucial role in geospatial data visualization. Several formats exist for storing and sharing this type of data, with GeoJSON, KML, and shapefiles being among the most popular. As a prominent figure in the field of open-source data portal platforms, [PortalJS](https://portaljs.org/) strives to support as many map formats as possible.
Maps play a crucial role in geospatial data visualization. Several formats exist for storing and sharing this type of data, with GeoJSON, KML, and shapefiles being among the most popular. As a prominent figure in the field of open-source data portal platforms, [PortalJS](https://portaljs.com/) strives to support as many map formats as possible.
Taking inspiration from the ckanext-geoview extension, we currently support KML and GeoJSON formats in [PortalJS](https://portaljs.org/). This remarkable extension is a plugin for CKAN, the world's leading open source data management system, that enables users to visualize geospatial data in diverse formats on an interactive map. Apart from KML and GeoJSON format support, our roadmap entails extending compatibility to encompass all other formats supported by ckanext-geoview. Rest assured, we are committed to empowering users with a wide array of map format options in the future.
Taking inspiration from the ckanext-geoview extension, we currently support KML and GeoJSON formats in [PortalJS](https://portaljs.com/). This remarkable extension is a plugin for CKAN, the world's leading open source data management system, that enables users to visualize geospatial data in diverse formats on an interactive map. Apart from KML and GeoJSON format support, our roadmap entails extending compatibility to encompass all other formats supported by ckanext-geoview. Rest assured, we are committed to empowering users with a wide array of map format options in the future.
So, what makes these formats special?
@ -27,7 +27,7 @@ So, what makes these formats special?
## Unveiling the Power of Leaflet and OpenLayers
To display maps in [PortalJS](https://portaljs.org/), we utilize two powerful JavaScript libraries for creating interactive maps based on different layers: Leaflet and OpenLayers. Each offers distinct advantages (and disadvantages), inspiring us to integrate both and give users the flexibility to choose.
To display maps in [PortalJS](https://portaljs.com/), we utilize two powerful JavaScript libraries for creating interactive maps based on different layers: Leaflet and OpenLayers. Each offers distinct advantages (and disadvantages), inspiring us to integrate both and give users the flexibility to choose.
Leaflet is the leading open-source JavaScript library known for its mobile-friendly, interactive maps. With its compact size (just 42 KB of JS), it provides all the map features most developers need. Leaflet is designed with simplicity, performance and usability in mind. It works efficiently across all major desktop and mobile platforms.
@ -59,8 +59,8 @@ Users can also choose a region of focus, which will depend on the data, by setti
Through our ongoing enhancements to the [PortalJS library](https://storybook.portaljs.org/), we aim to empower users to create engaging and informative data portals featuring diverse map formats and data components.
Why not give [PortalJS](https://portaljs.org/) a try today and discover the possibilities for your own data portals? To get started, check out our comprehensive documentation here: [PortalJS Documentation](https://portaljs.org/docs).
Why not give [PortalJS](https://portaljs.com/) a try today and discover the possibilities for your own data portals? To get started, check out our comprehensive documentation here: [PortalJS Documentation](https://portaljs.com/opensource).
Have questions or comments about using [PortalJS](https://portaljs.org/) for your data portals? Feel free to share your thoughts on our [Discord channel](https://discord.com/invite/EeyfGrGu4U). We're here to help you make the most of your data.
Have questions or comments about using [PortalJS](https://portaljs.com/) for your data portals? Feel free to share your thoughts on our [Discord channel](https://discord.com/invite/EeyfGrGu4U). We're here to help you make the most of your data.
Stay tuned for more exciting developments as we continue to enhance [PortalJS](https://portaljs.org/)!
Stay tuned for more exciting developments as we continue to enhance [PortalJS](https://portaljs.com/)!

View File

@ -4,7 +4,7 @@ authors: ['Luccas Mateus']
date: 2021-04-20
---
We have created a full data portal demo using PortalJS, all backed by a CKAN instance storing data and metadata. Below you can see a screenshot of the homepage and of an individual dataset page.
We have created a full data portal demo using DataHub PortalJS, all backed by a CKAN instance storing data and metadata. Below you can see a screenshot of the homepage and of an individual dataset page.
![](https://i.imgur.com/ai0VLS4.png)
![](https://i.imgur.com/3RhXOW4.png)
@ -14,7 +14,7 @@ We have created a full data portal demo using PortalJS all backed by a CKAN inst
To create a Portal app, run the following command in your terminal:
```console
npx create-next-app -e https://github.com/datopian/portaljs/tree/main/examples/ckan
npx create-next-app -e https://github.com/datopian/datahub/tree/main/examples/ckan
```
> NB: Under the hood, this uses the tool called create-next-app, which bootstraps an app for you based on our CKAN example.

View File

@ -30,12 +30,12 @@ https://github.com/datopian/markdowndb
## 📚 The Guide
https://portaljs.org/guide
https://portaljs.com/opensource
I've sketched overviews for two upcoming tutorials:
1. **Collaborating with others on your website**: Learn how to make your website projects a team effort. [See it here](https://portaljs.org/guide#tutorial-3-collaborating-with-others-on-your-website-project)
2. **Customising your website and previewing your changes locally**: Customize and preview your site changes locally, without headaches. [See it here](https://portaljs.org/guide#tutorial-4-customising-your-website-locally-and-previewing-your-changes-locally)
1. **Collaborating with others on your website**: Learn how to make your website projects a team effort. [See it here](https://portaljs.com/guide#tutorial-3-collaborating-with-others-on-your-website-project)
2. **Customising your website and previewing your changes locally**: Customize and preview your site changes locally, without headaches. [See it here](https://portaljs.com/guide#tutorial-4-customising-your-website-locally-and-previewing-your-changes-locally)
## 🌐 LifeItself.org

View File

@ -11,7 +11,7 @@ In our last article, we explored [the Open Spending revamp](https://www.datopian
## The Core: PortalJS
At the core of the revamped OpenSpending website is [PortalJS](https://portaljs.org), a JavaScript library that's a game-changer in building powerful data portals with data visualizations. What makes it so special? Well, it's packed with reusable React components that make our lives - and yours - a whole lot easier. Take, for example, our sleek CSV previews; they're brought to life by PortalJS' [FlatUI Component](https://storybook.portaljs.org/?path=/story/components-flatuitable--from-url). It helps transform raw numbers into visuals that you can easily understand and use. Curious to know more? Check out the [official PortalJS website](https://portaljs.org).
At the core of the revamped OpenSpending website is [PortalJS](https://portaljs.com), a JavaScript library that's a game-changer in building powerful data portals with data visualizations. What makes it so special? Well, it's packed with reusable React components that make our lives - and yours - a whole lot easier. Take, for example, our sleek CSV previews; they're brought to life by PortalJS' [FlatUI Component](https://storybook.portaljs.org/?path=/story/components-flatuitable--from-url). It helps transform raw numbers into visuals that you can easily understand and use. Curious to know more? Check out the [official PortalJS website](https://portaljs.com).
![Data visualization](/assets/blog/2023-10-13-the-open-spending-revamp-behind-the-scenes/data-visualization.png)

View File

@ -11,19 +11,18 @@ const config = {
authorUrl: 'https://datopian.com/',
navbarTitle: {
// logo: "/images/logo.svg",
text: '🌀 PortalJS',
text: '🌀 DataHub PortalJS',
// version: "Alpha",
},
navLinks: [
{ name: 'Docs', href: '/docs' },
// { name: "Components", href: "/docs/components" },
{ name: 'Blog', href: '/blog' },
{ name: 'Showcases', href: '/#showcases' },
{ name: 'Howtos', href: '/howtos' },
{ name: 'Guide', href: '/guide' },
{
name: 'Examples',
href: '/examples/'
name: 'Showcases',
href: '/showcases/'
},
{
name: 'Components',
@ -45,6 +44,7 @@ const config = {
{ rel: 'icon', href: '/favicon.ico' },
{ rel: 'apple-touch-icon', href: '/icon.png', sizes: '120x120' },
],
canonical: 'https://portaljs.com/',
openGraph: {
type: 'website',
title:
@ -68,8 +68,8 @@ const config = {
cardType: 'summary_large_image',
},
},
github: 'https://github.com/datopian/portaljs',
discord: 'https://discord.gg/EeyfGrGu4U',
github: 'https://github.com/datopian/datahub',
discord: 'https://discord.gg/KrRzMKU',
tableOfContents: true,
analytics: 'G-96GWZHMH57',
// editLinkShow: true,

View File

@ -1,249 +0,0 @@
# Authentication
## Introduction
The core function of authentication is to **Identify** Users of the Portal (in a federated way) so we can base access on their identity.
There are 3 major conceptual components: Identity, Accounts and Sessions which come together in the following stages:
* **Root Identity Determination:** Determine Identity often via Delegation
* **Sessions:** Persistence of the identity in the web application in a secure way (without new identity determination on each request! I don't want to have to log in via a third-party service every time)
* **Account (aka profile):** Storing related account/profile information in our application (not in the third-party identity), e.g. email, name and other preferences
* This will usually get auto-created at first identification
* In a limited sense this can be seen as a cache of info from the identity system (e.g. your email)
* However, it often holds richer, app-specific information that the application generates (relevant for personalization)
### Root Identity Determination options :key:
The identity determination can be done in multiple ways. In this article we're considering the following 3 options, which we believe are widely used:
- Password authentication - traditional username and password pair
- Single Sign-on (SSO) via protocols such as OAuth, SAML, OpenID Connect
- One-time password (OTP) via email or SMS (aka passwordless connection)
#### Password authentication
The traditional way of authenticating users. When signing up, the user provides at least a username and password pair, which is then stored in a database for future authentication. Normally, additional information such as email address, full name, etc. is also requested when registering.
Examples of password authentication in popular services:
- GitHub - https://github.com/join
- GitLab - https://gitlab.com/users/sign_up
- NPM - https://www.npmjs.com/signup
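For illustration only (this is not CKAN code), the essence of password authentication is storing a salted, slow hash of the password rather than the password itself, and comparing hashes at login. A minimal sketch using Python's standard library; a real deployment should rely on a vetted auth library:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Store the salt and the derived key, never the raw password
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(digest, expected_digest)
```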
#### Single Sign-on (SSO)
A way of delegating the identity determination process to a third-party service. Normally, popular social network services are used, e.g. Google, Facebook, Twitter, etc. SSO implementations can be done using the OAuth or SAML protocols. In addition, there is the OpenID Connect protocol, which is an extension of OAuth 2.0.
- OAuth
- JWT based
- JSON based
- 'webby'
- SAML
- XML based
- SOAP based
- 'enterprisey'
List of OAuth providers:
https://en.wikipedia.org/wiki/List_of_OAuth_providers
Examples of SSO in popular projects:
- https://datahub.io/login
- https://vercel.com/signup
#### One-time password (OTP)
Also known as a dynamic password, OTP addresses the limitations of the traditional password authentication method. Usually, the one-time passwords are received via email or SMS.
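To make the idea concrete, here is a rough, illustrative Python sketch (not tied to any CKAN extension) of issuing and verifying a short-lived code, assuming delivery via email/SMS happens elsewhere:

```python
import secrets
import time

# In-memory store of pending codes: {email: (code, expiry timestamp)}
pending_codes = {}

def issue_otp(email, ttl_seconds=300):
    code = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit numeric code
    pending_codes[email] = (code, time.time() + ttl_seconds)
    return code  # sent to the user via email or SMS

def verify_otp(email, code):
    stored = pending_codes.pop(email, None)
    if stored is None:
        return False
    expected, expires_at = stored
    return time.time() < expires_at and secrets.compare_digest(code, expected)
```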
### Account (aka profile)
- Storage of user profile information (email, fullname, gravatar etc.)
- Retrieving user profile information via API
- Updating profile
- Deleting profile
### Sessions
- Log out: DePersisting the Session
- Invalidating all Sessions: e.g. if a security issue
- Sessions outside of browsers
## Key Job Stories
When a user signs in, I want to know her/his identity so that I can limit access and editing based on who she/he is.
When a user visits the data portal for the first time, I want to provide him/her a way to register easily/quickly so that more people use the data portal.
When I visit the data portal for the first time, I want to sign up using my existing social network account so that I don't need to remember yet another set of credentials.
When I'm using the CLI app (or anything else outside browser), I want to be able to login so that I can work from the terminal (e.g., have write access: editing datasets etc.).
[More job stories](#more-job-stories).
## CKAN 2 (CKAN Classic)
### Basic CKAN authentication
In the classic system, we have basic CKAN authentication. Below is how the registration page looks:
![CKAN Classic register page](/static/img/docs/dms/ckan-register.png)
Registration flow in CKAN Classic:
```mermaid
sequenceDiagram
user->>ckan: fill in the form and submit
ckan->>ckan: check access (if user can create user)
ckan->>ckan: parse params
ckan->>ckan: check recaptcha
ckan->>ckan: call 'user_create' action
ckan->>ckan.model: add a new user into db
ckan->>ckan: create an activity
ckan->>ckan: log the user
ckan->>user: redirect to dashboard
```
We can extend basic CKAN authentication with:
- LDAP
- https://extensions.ckan.org/extension/ldap/
- https://github.com/NaturalHistoryMuseum/ckanext-ldap
- OAuth - see below
- SAML - https://extensions.ckan.org/extension/saml2/
### CKAN Classic as OAuth client
CKAN Classic can also be used as OAuth client:
- https://github.com/conwetlab/ckanext-oauth2 - this is the only one that's maintained.
- https://github.com/etalab/ckanext-oauth2 - outdated, the one above is based on this.
- https://github.com/okfn/ckanext-oauth - last commit 9 years ago.
- https://github.com/ckan/ckanext-oauth2waad - Windows Azure Active Directory specific and outdated.
How it works:
```mermaid
sequenceDiagram
user->>ckan: request for login via OAuth provider
ckan->>ckan.oauth: raise 401 and call `challenge` function
ckan.oauth->>user: redirect the user to the 3rd party log in page
user->>3rdparty: perform login
3rdparty->>ckan.oauth: redirect to /oauth2/callback with token
ckan.oauth->>3rdparty: call `authenticate` with token
3rdparty->>ckan.oauth: return user info
ckan.oauth->>ckan: if doesn't exist save that info in db or update it
ckan.oauth->>ckan.oauth: add cookies
ckan.oauth->>user: redirect to dashboard
```
## CKAN 3 (Next Gen)
We have considered some popular and/or modern solutions for identity management that we can implement in CKAN 3:
https://docs.google.com/spreadsheets/d/1qXZyzAbA2NtpnoSZRJ2K_EbaWJnvxkrKVzQ_2rD5eQw/edit#gid=0
Shortlist based on scores from the spreadsheet above:
- Auth0
- AuthN
- Ory/Kratos
Recommendation:
All projects from the shortlist can be considered. It is worth giving each of them a try to find out what works best for your project's needs. Testing out Auth0 should be straightforward and take less than an hour. AuthN and Ory/Kratos would require building Docker images and running them locally, but overall it should not be time-consuming.
### Existing work
In datahub.io we have implemented SSO via Google/GitHub. Below is a sequence diagram showing the auth flow with datopian/auth + a frontend Express app (similar to the CKAN 3 frontend):
```mermaid
sequenceDiagram
frontend.login->>auth.authenticate: authenticate(jwt=None,next=/success/...)
auth.authenticate->>frontend.login: failed + here are urls for logging on 3rd party including success
frontend.login->>user: login form with login urls to 3rd party including next url in state
user->>3rdparty: login
3rdparty->>auth.oauth_response: success
auth.oauth_response->>frontend.success: redirect to next url
frontend.success->>auth.authenticate: with valid jwt
auth.authenticate->>frontend.success: valid + here is profile
frontend.success->>frontend.success: decode jwt, check it, then see localstorage
frontend.success->>frontend.dashboard: redirect to dashboard
```
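For illustration, the "decode jwt, check it" step above boils down to verifying the token's signature and expiry before trusting the identity it carries. A minimal Python sketch using PyJWT; the key, algorithm and claim names are assumptions, not the actual datopian/auth configuration:

```python
import jwt  # PyJWT, used here purely for illustration

AUTH_SERVICE_PUBLIC_KEY = "..."  # placeholder: public key of the auth service

def check_token(token):
    # Raises if the signature is invalid or the token has expired
    claims = jwt.decode(token, AUTH_SERVICE_PUBLIC_KEY, algorithms=["RS256"])
    return claims  # e.g. user id, expiry, granted scopes
```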
## CKAN 2 to CKAN 3 (aka Next Gen)
How does this conceptual framework map to an evolution of CKAN 2 to CKAN 3?
```mermaid
graph TD
subgraph "CKAN Classic"
Signup["Classic signup, e.g., self-service or by sysadmin"]
Login["Classic login if you're using the classic UI"]
OAuth["OAuth2(ORY/Hydra)"]
end
subgraph "Authentication service (ORY/Kratos)"
SSO["Social Sign-On: Github, Google, Facebook"]
CC["CKAN Classic"]
Admins["Sysadmin users"]
Curators["Data curators"]
Users["Regular users"]
end
subgraph "Frontend v3"
SignupFront["Signup via Kratos"]
LoginFront["Login via Kratos"]
end
SignupFront --"Regular user"--> SSO
LoginFront --"Regular user"--> SSO
LoginFront --"Data curator"--> CC
CC --> Admins
CC --> Curators
SSO --> Users
CC --"Redirect"--> OAuth
OAuth --> Login
```
Sequence diagram of login process:
[![](https://mermaid.ink/img/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG5cdEJyb3dzZXItPj5Gcm9udGVuZDogUmVxdWVzdCB0byBgL2F1dGgvbG9naW5gXG4gIEZyb250ZW5kLT4-S3JhdG9zOiBBdXRoIHJlcXVlc3RcbiAgS3JhdG9zLT4-QnJvd3NlcjogUmVkaXJlY3QgdG8gYC9hdXRoL2xvZ2luP3JlcXVlc3Q9e2lkfWAgcGFyYW1cbiAgQnJvd3Nlci0-PkZyb250ZW5kOiBHZXQgYC9hdXRoL2xvZ2luP3JlcXVlc3Q9e2lkfWBcbiAgRnJvbnRlbmQtPj5LcmF0b3M6IEZldGNoIGRhdGEgZm9yIHJlbmRlcmluZyB0aGUgZm9ybVxuICBLcmF0b3MtPj5Gcm9udGVuZDogTG9naW4gb3B0aW9uc1xuICBGcm9udGVuZC0-PkJyb3dzZXI6IFJlbmRlciB0aGUgbG9naW4gZm9ybSB3aXRoIGF2YWlsYWJsZSBvcHRpb25zXG4gIEJyb3dzZXItPj5Gcm9udGVuZDogU3VwcGx5IGZvcm0gZGF0YVxuICBGcm9udGVuZC0-PktyYXRvczogVmFsaWRhdGUgYW5kIGxvZ2luXG4gIEtyYXRvcy0-PkZyb250ZW5kOiBTZXQgc2Vzc2lvblxuICBGcm9udGVuZC0-PkJyb3dzZXI6IFJlZGlyZWN0IHRvIC9kYXNoYm9hcmRcblxuXG5cdFx0XHRcdFx0IiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZX0)](https://mermaid-js.github.io/mermaid-live-editor/#/edit/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG5cdEJyb3dzZXItPj5Gcm9udGVuZDogUmVxdWVzdCB0byBgL2F1dGgvbG9naW5gXG4gIEZyb250ZW5kLT4-S3JhdG9zOiBBdXRoIHJlcXVlc3RcbiAgS3JhdG9zLT4-QnJvd3NlcjogUmVkaXJlY3QgdG8gYC9hdXRoL2xvZ2luP3JlcXVlc3Q9e2lkfWAgcGFyYW1cbiAgQnJvd3Nlci0-PkZyb250ZW5kOiBHZXQgYC9hdXRoL2xvZ2luP3JlcXVlc3Q9e2lkfWBcbiAgRnJvbnRlbmQtPj5LcmF0b3M6IEZldGNoIGRhdGEgZm9yIHJlbmRlcmluZyB0aGUgZm9ybVxuICBLcmF0b3MtPj5Gcm9udGVuZDogTG9naW4gb3B0aW9uc1xuICBGcm9udGVuZC0-PkJyb3dzZXI6IFJlbmRlciB0aGUgbG9naW4gZm9ybSB3aXRoIGF2YWlsYWJsZSBvcHRpb25zXG4gIEJyb3dzZXItPj5Gcm9udGVuZDogU3VwcGx5IGZvcm0gZGF0YVxuICBGcm9udGVuZC0-PktyYXRvczogVmFsaWRhdGUgYW5kIGxvZ2luXG4gIEtyYXRvcy0-PkZyb250ZW5kOiBTZXQgc2Vzc2lvblxuICBGcm9udGVuZC0-PkJyb3dzZXI6IFJlZGlyZWN0IHRvIC9kYXNoYm9hcmRcblxuXG5cdFx0XHRcdFx0IiwibWVybWFpZCI6eyJ0aGVtZSI6ImRlZmF1bHQifSwidXBkYXRlRWRpdG9yIjpmYWxzZX0)
From ORY/Kratos:
[![](https://mermaid.ink/img/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gIHBhcnRpY2lwYW50IEIgYXMgQnJvd3NlclxuICBwYXJ0aWNpcGFudCBLIGFzIE9SWSBLcmF0b3NcbiAgcGFydGljaXBhbnQgQSBhcyBZb3VyIEFwcGxpY2F0aW9uXG5cblxuICBCLT4-SzogSW5pdGlhdGUgTG9naW5cbiAgSy0-PkI6IFJlZGlyZWN0cyB0byB5b3VyIEFwcGxpY2F0aW9uJ3MgL2xvZ2luIGVuZHBvaW50XG4gIEItPj5BOiBDYWxscyAvbG9naW5cbiAgQS0tPj5LOiBGZXRjaGVzIGRhdGEgdG8gcmVuZGVyIGZvcm1zIGV0Y1xuICBCLS0-PkE6IEZpbGxzIG91dCBmb3JtcywgY2xpY2tzIGUuZy4gXCJTdWJtaXQgTG9naW5cIlxuICBCLT4-SzogUE9TVHMgZGF0YSB0b1xuICBLLS0-Pks6IFByb2Nlc3NlcyBMb2dpbiBJbmZvXG5cbiAgYWx0IExvZ2luIGRhdGEgdmFsaWRcbiAgICBLLS0-PkI6IFNldHMgc2Vzc2lvbiBjb29raWVcbiAgICBLLT4-QjogUmVkaXJlY3RzIHRvIGUuZy4gRGFzaGJvYXJkXG4gIGVsc2UgTG9naW4gZGF0YSBpbnZhbGlkXG4gICAgSy0tPj5COiBSZWRpcmVjdHMgdG8geW91ciBBcHBsaWNhaXRvbidzIC9sb2dpbiBlbmRwb2ludFxuICAgIEItPj5BOiBDYWxscyAvbG9naW5cbiAgICBBLS0-Pks6IEZldGNoZXMgZGF0YSB0byByZW5kZXIgZm9ybSBmaWVsZHMgYW5kIGVycm9yc1xuICAgIEItLT4-QTogRmlsbHMgb3V0IGZvcm1zIGFnYWluLCBjb3JyZWN0cyBlcnJvcnNcbiAgICBCLT4-SzogUE9TVHMgZGF0YSBhZ2FpbiAtIGFuZCBzbyBvbi4uLlxuICBlbmRcbiIsIm1lcm1haWQiOnsidGhlbWUiOiJuZXV0cmFsIiwic2VxdWVuY2VEaWFncmFtIjp7ImRpYWdyYW1NYXJnaW5YIjoxNSwiZGlhZ3JhbU1hcmdpblkiOjE1LCJib3hUZXh0TWFyZ2luIjowLCJub3RlTWFyZ2luIjoxNSwibWVzc2FnZU1hcmdpbiI6NDUsIm1pcnJvckFjdG9ycyI6dHJ1ZX19fQ)](https://mermaid-js.github.io/mermaid-live-editor/#/edit/eyJjb2RlIjoic2VxdWVuY2VEaWFncmFtXG4gIHBhcnRpY2lwYW50IEIgYXMgQnJvd3NlclxuICBwYXJ0aWNpcGFudCBLIGFzIE9SWSBLcmF0b3NcbiAgcGFydGljaXBhbnQgQSBhcyBZb3VyIEFwcGxpY2F0aW9uXG5cblxuICBCLT4-SzogSW5pdGlhdGUgTG9naW5cbiAgSy0-PkI6IFJlZGlyZWN0cyB0byB5b3VyIEFwcGxpY2F0aW9uJ3MgL2xvZ2luIGVuZHBvaW50XG4gIEItPj5BOiBDYWxscyAvbG9naW5cbiAgQS0tPj5LOiBGZXRjaGVzIGRhdGEgdG8gcmVuZGVyIGZvcm1zIGV0Y1xuICBCLS0-PkE6IEZpbGxzIG91dCBmb3JtcywgY2xpY2tzIGUuZy4gXCJTdWJtaXQgTG9naW5cIlxuICBCLT4-SzogUE9TVHMgZGF0YSB0b1xuICBLLS0-Pks6IFByb2Nlc3NlcyBMb2dpbiBJbmZvXG5cbiAgYWx0IExvZ2luIGRhdGEgdmFsaWRcbiAgICBLLS0-PkI6IFNldHMgc2Vzc2lvbiBjb29raWVcbiAgICBLLT4-QjogUmVkaXJlY3RzIHRvIGUuZy4gRGFzaGJvYXJkXG4gIGVsc2UgTG9naW4gZGF0YSBpbnZhbGlkXG4gICAgSy0tPj5COiBSZWRpcmVjdHMgdG8geW91ciBBcHBsaWNhaXRvbidzIC9sb2dpbiBlbmRwb2ludFxuICAgIEItPj5BOiBDYWxscyAvbG9naW5cbiAgICBBLS0-Pks6IEZldGNoZXMgZGF0YSB0byByZW5kZXIgZm9ybSBmaWVsZHMgYW5kIGVycm9yc1xuICAgIEItLT4-QTogRmlsbHMgb3V0IGZvcm1zIGFnYWluLCBjb3JyZWN0cyBlcnJvcnNcbiAgICBCLT4-SzogUE9TVHMgZGF0YSBhZ2FpbiAtIGFuZCBzbyBvbi4uLlxuICBlbmRcbiIsIm1lcm1haWQiOnsidGhlbWUiOiJuZXV0cmFsIiwic2VxdWVuY2VEaWFncmFtIjp7ImRpYWdyYW1NYXJnaW5YIjoxNSwiZGlhZ3JhbU1hcmdpblkiOjE1LCJib3hUZXh0TWFyZ2luIjowLCJub3RlTWFyZ2luIjoxNSwibWVzc2FnZU1hcmdpbiI6NDUsIm1pcnJvckFjdG9ycyI6dHJ1ZX19fQ)
Kratos to Hydra in CKAN Classic:
WIP
Questions
* Does CKAN Classic allow us to store arbitrary account information (are there "extras")?
* How would we avoid having to support identity persistence, delegation etc. in both the NG frontend and the Classic Admin UI?
* Can we share cookies (e.g. via using subdomains)?
* How is login, identity determination etc. done, at least for the frontend, in DataHub.io?
* Should account UI really be in the NG frontend vs the Classic Admin UI?
* How can we handle "invite a user" to my org set up ... (it's basically post-processing after sign up ...)
## Appendix
### More job stories
When a user visits the data portal, I want to provide multiple options for him/her to sign up so that I have more users registered and using the data portal.
When a user needs to change his/her profile info, I want to make sure it is possible, so that I have the up-to-date information about users.
When my personal info (email, etc.) changes, I want to edit it in my profile so that I provide up-to-date information about me and receive messages (e.g. notifications) properly.
When I decide to stop using the data portal, I want to be able to delete my account, so that my personal details aren't stored in the service that I don't need anymore.

View File

@ -1,215 +0,0 @@
# Blob Storage
## Introduction
DMS and data portals often need to *store* data as well as metadata. As such, they require a system for doing this. This page focuses on Blob Storage aka Bulk or Raw storage (see [storage](/docs/dms/storage) page for an overview of all types of storage).
Blob storage is for storing "blobs" of data, that is, raw streams of bytes like files on a filesystem. For blob storage, think local filesystem or cloud storage like S3, GCS, etc.
Blob Storage in a DMS can be provided via:
* Local file system: storing on disk or storage directly connected to the instance
* Cloud storage like S3, Google Cloud Storage, Azure storage etc
Today, cloud storage would be the default in most cases.
### Features
* Storage: Persistent, cost-efficient storage
* Download: Fast, reliable download (possibly even with support for edge distribution)
* Upload: reliable and rapid upload
* Direct upload to (cloud) storage by clients, i.e. without going via the DMS. Why? Because cloud storage has many features that would be costly to replicate (e.g. multipart, resumable uploads, etc.), plus excellent performance and reliability for upload. It also cuts out the DMS backend as a middleman, thereby saving bandwidth, reducing load on the DMS backend and improving performance
* Upload UI: having an excellent UI for doing upload. NB: this UI is considered part of the [publish feature](/docs/dms/publish)
* Cloud: integrate with cloud storage
* Permissions: restricting access to data stored in blob storage based on the permissions of the DMS. For example, if Joe does not have access to a dataset on the DMS he should not be able to access associated blob data in the storage system
## Flows
### Direct to Cloud Upload
Want: Direct upload to cloud storage ... But you need to authorize that ... So give them a token from your app
A sequence diagram illustrating the process for a direct to cloud upload:
```mermaid
sequenceDiagram
participant Browser as Client (Browser / Code)
participant Authz as Authz Server
participant BitStore as Storage Access Token Service
participant Storage as Cloud Storage
Browser->>Authz: Give me a BitStore access token
Authz->>Browser: Token
Browser->>BitStore: Get a signed upload URL (access token, file metadata)
BitStore->>Browser: Signed URL
Browser->>Storage: Upload file (signed URL)
Storage->>Browser: OK (storage metadata)
```
Here's a more elaborate version showing storage of metadata into the MetaStore afterwards (and skipping the Authz service):
```mermaid
sequenceDiagram
participant browser as Client (Browser / Code)
participant vfts as MetaStore
participant bitstore as Storage Access Token Service
participant storage as Cloud Storage
browser->>browser: Select files to upload
browser->>browser: calculate file hashes (if doing content addressable)
browser->>bitstore: get signed URLs (file1.csv URL, file2.csv URL, auth info)
bitstore->>browser: signed URLs
browser->>storage: upload file1.csv
storage->>browser: OK
browser->>storage: upload file2.csv
storage->>browser: OK
browser->>browser: Compose datapackage.json
browser->>vfts: create dataset(datapackage.json, file1.csv pointer, file2.csv pointer, jwt token, ...)
vfts->>browser: OK
```
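To make the flow above concrete, here is a rough Python sketch of a client doing a direct-to-cloud upload. The token service endpoint, request payload and `upload_url` response field are illustrative assumptions, not the Giftless / Git LFS protocol itself:

```python
import requests

def direct_upload(file_path, token_service_url, jwt_token):
    with open(file_path, "rb") as f:
        data = f.read()

    # 1. Ask the Storage Access Token Service for a signed upload URL
    resp = requests.post(
        token_service_url,
        json={"filename": file_path, "size": len(data)},
        headers={"Authorization": f"Bearer {jwt_token}"},
    )
    resp.raise_for_status()
    signed_url = resp.json()["upload_url"]

    # 2. Upload the bytes straight to cloud storage, bypassing the DMS backend
    requests.put(signed_url, data=data).raise_for_status()
    return signed_url
```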
## CKAN 2 (Classic)
Blob Storage is known as the FileStore in CKAN v2 and below. The default is local disk storage.
There is support for cloud storage via a variety of extensions, the most prominent of which is `ckanext-cloudstorage`: https://github.com/TkTech/ckanext-cloudstorage
There are a variety of issues:
* Cloud storage is not a first-class citizen in CKAN: CKAN defaults to local file storage, but cloud storage is now the norm and has much better scalability and performance, as well as integratability with cloud deployments
* The FileStore interface definition has a poor separation of concerns (for example, blob storage file paths are set in the FileStore component, not in core CKAN), which makes it hard / hacky to extend and use for key use cases, e.g. versioning.
* `ckanext-cloudstorage` (the default cloud storage extension) is ok but has many issues e.g.
* No direct to cloud upload: it uses CKAN backend as a middleman so all data must go via ckan backend
* Implements its own (sometimes unreliable) version of multipart upload (which means additional code which isn't as reliable as cloud storage providers interface)
* No access to advanced features such as resumability etc
Generally, we at Datopian have seen a lot of issues around multipart / large file upload stability with clients and are still seeing issues when a lot of large files are uploaded via scripts. Fixing and refactoring code related to storage is very costly, and tends to result in client specific "hacks".
## CKAN v3
An approach to blob storage that leverages cloud blob storage directly (i.e. without having to upload and serve all files via the CKAN web server), unlocking the performance characteristics of the storage backend. It is designed with a microservice approach and supports direct to cloud uploads and downloads. The key components are listed in the next section. You can read more about the overall design approach in the [design section below](#Design).
It is backwards compatible with CKAN v2 and has been successfully deployed with CKAN v2.8 and v2.9.
**Status: Production.**
### Components
* [ckanext-blob-storage](https://github.com/datopian/ckanext-blob-storage) (formerly known as ckanext-external-storage)
* Hooking CKAN to Giftless replacing resource storage
* Depends on giftless-client and ckanext-authz-service
* Doesn't implement IUploader - completely overrides upload / download routes for resources
* [Giftless](https://github.com/datopian/giftless) - Git LFS compatible implementation for storage with some extras on top. This hands out access tokens to store data in cloud storage.
* Docs at https://giftless.datopian.com
* Backends for Azure, Google Cloud Storage and local
* Multipart support (on top of standard LFS protocol)
* Accepts JWT tokens for authentication and authorization
* [ckanext-authz-service](https://github.com/datopian/ckanext-authz-service/) - This extension uses CKAN's built-in authentication and authorization capabilities to: a) Generate JWT tokens and provide them via CKAN's Web API to clients and b) Validate JWT tokens.
* Allows hooking CKAN's authentication and authorization capabilities to generate signed JWT tokens, to integrate with external systems
* Not specific for Giftless, but this is what it was built for
* [ckanext-asset-storage](https://github.com/datopian/ckanext-asset-storage) - this takes care of storing non-data assets e.g. organization images etc.
* CKAN IUploader for assets (not resources!)
* Pluggable backends - currently local and Azure
* Much cleaner than older implementations (ckanext-cloudstorage etc.)
Clients:
* [giftless-client-py](https://github.com/datopian/giftless-client) - Python client for Git LFS and Giftless-specific features
* Used by ckanext-blob-storage and other tools
* [giftless-client-js](https://github.com/datopian/giftless-client-js) - Javascript client for Git LFS and Giftless-specific features
* Used by ckanext-blob-storage and other tools for creating uploaders in the UI
## Design
### Purpose
The goal of this project is to create a more **_flexible_** system for storing **_data files_** (AKA “resources”) for **_CKAN_ and _other implementations_** of a data portal, so that CKAN can support versioning and large file uploads (with a great upload UX), plug easily into cloud and local file storage backends and, in general, be easy to customize both at the storage layer and in the CKAN client code for that layer.
### Features
* Do one thing and do it well: provide an API to store and retrieve files from storage, in a way that is pluggable into a micro-services based application and to existing CKAN (2.8 / 2.9)
* Does not force, and in fact is not aware of, a specific file naming logic (i.e. resource file names could be based on a user given name, a content hash, a revision ID or any mixture of these - it is up to the using system to decide)
* Does not force a specific storage backend; Should support Amazon S3, Azure Storage and local file storage in some way initially but in general backend should be pluggable
* Does not force a specific authentication scheme; Expects a signed JWT token, does not care who signed it and how the user got authenticated
* Does not force a complex authorization scheme; leaves it to an external system to do complex authorization if needed;
* By default, the system can work in an “admin party” mode where all authenticated users have full access to all files. This will be “good enough” for many DMS implementations including CKAN.
* Potentially, allow plugging in a more complex authorization logic that relies on JWT claims to perform granular authorization checks
### For Data Files (i.e. Blobs)
This system is about storing and providing access to blobs, or streams of bytes; it is not about providing access to the data stored within (i.e. it is not meant to replace CKAN's datastore).
### For CKAN whilst not necessarily CKAN Specific
While the system's design should not be CKAN-specific in any way, our current client needs require us to provide a CKAN extension that integrates with this system.
CKAN's current IUploader interface has been identified as too narrow to provide the functionality required by complex projects (resource versioning, direct cloud uploads and downloads, large file support and multipart support). While some of these needs could be, and have been, “hacked” through the IUploader interface, the implementations have been overly complex and hard to debug.
Our goal should be to provide a CKAN extension that provides the following functionality directly:
* Uploading and downloading resource files directly from the client if supported by the storage backend
* Multipart upload support if supported by storage backend
* Handling of signed URLs for uploads and private downloads
* Client side code for handling multipart uploads
* TBD: If storage backend does not support direct uploads / downloads, fall back to …
In addition, this extension should provide an API for other extensions to do things like:
* Set the file naming scheme (We need this for ckanext-versions)
* Lower level file access, e.g. move and delete files. We may need this in the future to optimize storage and deduplicate files as proposed for ckanext-versions
In addition, this extension must “play nice” with common CKAN features such as the datastore extension and related datapusher / xloader extensions.
### Usable For other DMS implementations
There should be nothing in this system, except for the CKAN extension described above, that is specific to CKAN. That will allow to re-use and re-integrate this system as a micro-service in other DMS implementations such as ckan-ng and others.
In fact, the core part of this system should be a generic, abstract storage service with a light authorization layer. This could make it useful in a host of situations where storage micro-service is needed.
### High Level Principles
Common Principles
* Uploads and downloads directly from cloud providers to the browser
* Signed uploads / downloads - for private / authorized only data access
* Support for AWS, Azure and potentially GCP storage
* Support for local (non cloud) storage, potentially through a system like [https://min.io/](https://min.io/)
* Multipart / large file upload support (a few GB in size should be supported for Gates)
* Not opinionated about file naming / paths; Allow users to set file locations under some pre-defined paths / buckets
* Client side support - browser widgets / code for uploading and downloading files / multipart uploads directly to different backends
* Well-documented flow for using from API (not browser)
* Provided API for deleting and moving files
* Provided API for accessing storage-level metadata (e.g. file MD5) (do we need this? It could be useful for processes that do things like deduplicating storage)
* Provided API for managing storage-level object level settings (e.g. “Content-disposition” / “Content-type” headers, etc.)
* Authorization based on some kind of portable scheme (JWT)
CKAN integration specific (implemented as a CKAN extension)
* JWT generation based on current CKAN user permissions
* Client widgets integration (or CKAN specific widgets) in right places in CKAN templates
* Hook into resource upload / download / deletion controllers in CKAN
* API to allow other extensions to control storage level object metadata (headers, path)
* API to allow other extensions to hook into lifecycle events - upload completion, download request, deletion etc.
### Components
The Decoupled Storage solution should be split into several parts, with some parts being independent of others:
* [External] Cloud Storage service (or similar API if local file system) e.g. S3, GCS, Azure Storage, Min.io (for local file system)
* Cloud Storage Access Service
* [External] Permissions Service for granting general permission tokens that give access to Cloud Storage Access Service
* JWT tokens can be generated by any party that has the right signing key. Thus, we can initially do without this if JWT signing is implemented as part of the CKAN extension
* Browser based Client for Cloud Storage (compatible with #1 and with different cloud vendors)
* CKAN extension that wraps the two parts above to provide a storage solution for CKAN
### Questions
* What is the file structure in the cloud ... i.e. what is the file path for uploaded files? Options:
* Client chooses a name/path
* Content addressable, i.e. the name is given by the content? How? Use a hash.
* Beauty of that: standard way to name things. The same thing has the same name (modulo collisions)
* Goes with versioning => same file = same name, diff file = diff name
* And do you enforce that from your app?
* Request for token needs to include the destination file path
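As a sketch of the content-addressable option, deriving the object path from a hash of the file's bytes means identical content always maps to the same path (the `blobs/` prefix here is a made-up example, not a prescribed layout):

```python
import hashlib

def content_address(path, prefix="blobs"):
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    # Same content => same name; different content => different name
    return f"{prefix}/{sha.hexdigest()}"
```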

View File

@ -1,503 +0,0 @@
# CKAN Client Guide
Guide to interacting with [CKAN](/docs/dms/ckan) for power users such as data scientists, data engineers and data wranglers.
This guide is about adding and managing data in CKAN programmatically and it assumes:
* You are familiar with key concepts like metadata, data, etc.
* You are working programmatically with a programming language such as Python, JavaScript or R (_coming soon_).
## Frictionless Formats
Clients use [Frictionless formats](https://specs.frictionlessdata.io/) by default for describing dataset and resource objects passed to client methods. Internally, we then use a *CKAN {'<=>'} Frictionless Mapper* (both [in JavaScript]( https://github.com/datopian/frictionless-ckan-mapper-js ) and [in Python](https://github.com/frictionlessdata/frictionless-ckan-mapper)) to convert objects to CKAN formats before calling the API. **Thus, you can use _Frictionless Formats_ by default with the client**.
>[!tip]As CKAN moves to Frictionless as the default, this will gradually become unnecessary.
## Quick start
Most of this guide has the Python programming language in mind, including its [convention regarding the use of _snake case_ for instance and method names](https://www.python.org/dev/peps/pep-0008/#descriptive-naming-styles).
If needed, you can adapt the instructions to JavaScript and R (coming soon) by using _camel case_ instead — for example, if in the Python code we have `client.push_blob(…)`, in JavaScript it would be `client.pushBlob(…)`.
### Prerequisites
Install the client for your language of choice:
* Python: https://github.com/datopian/ckan-client-py#install
* JavaScript: https://github.com/datopian/ckan-client-js#install
* R: _coming soon_
### Create a client
#### Python
```python
from ckanclient import Client
api_key = '771a05ad-af90-4a70-beea-cbb050059e14'
api_url = 'http://localhost:5000'
organization = 'datopian'
dataset = 'dailyprices'
lfs_url = 'http://localhost:9419'
client = Client(api_url, organization, dataset, lfs_url)
```
#### JavaScript
```javascript
const { Client } = require('ckanClient')
apiKey = '771a05ad-af90-4a70-beea-cbb050059e14'
apiUrl = 'http://localhost:5000'
organization = 'datopian'
dataset = 'dailyprices'
const client = Client(apiKey, organization, dataset, apiUrl)
```
### Upload a resource
That is to say, upload a file, implicitly creating a new dataset.
#### Python
```python
from frictionless import describe
resource = describe('my-data.csv')
client.push_blob(resource)
```
### Create a new empty Dataset with metadata
#### Python
```python
client.create('my-data')
client.push(resource)
```
### Adding a resource to an existing Dataset
>[!note]Not implemented yet.
```python
client.create('my-data')
client.push_resource(resource)
```
### Edit a Dataset's metadata
>[!note]Not implemented yet.
```python
dataset = client.retrieve('sample-dataset')
client.update_metadata(
dataset,
metadata: {'maintainer_email': 'sample@datopian.com'}
)
```
For details of metadata see the [metadata reference below](#metadata-reference).
## API - Porcelain
### `Client.create`
Expects a single argument: a _string_, a _dict_ (in Python), or an _object_ (in JavaScript). This argument is either a valid dataset name or a dictionary with metadata for the dataset in Frictionless format.
### `Client.push`
Expects a single argument: a _dict_ (in Python) or an _object_ (in JavaScript) with a dataset metadata in Frictionless format.
### `Client.retrieve`
Expects a single argument: a string with a dataset name or unique ID. Returns a Frictionless resource as a _dict_ (in Python) or as a _Promise&lt;object&gt;_ (in JavaScript).
### `Client.push_blob`
Expects a single argument: a _dict_ (in Python) or an _object_ (in JavaScript) with a Frictionless resource.
## API - Plumbing
### `Client.action`
This method bridges access to the CKAN API _action endpoint_.
#### In Python
Arguments:
| Name | Type | Default | Description |
| -------------------- | ---------- | ---------- | ------------------------------------------------------------ |
| `name` | `str` | (required) | The action name, for example, `site_read`, `package_show`… |
| `payload` | `dict` | (required) | The payload being sent to CKAN. When a payload is provided to a GET request, it will be converted to URL parameters and each key will be converted to snake case. |
| `http_get` | `bool` | `False` | Optional, if `True` will make `GET` request, otherwise `POST`. |
| `transform_payload` | `function` | `None` | Function to mutate the `payload` before making the request (useful to convert to and from CKAN and Frictionless formats). |
| `transform_response` | `function` | `None` | Function to mutate the response data before returning it (useful to convert to and from CKAN and Frictionless formats). |
>[!note]The CKAN API uses the CKAN dataset and resource formats (rather than Frictionless formats).
In other words, to stick to Frictionless formats, you can pass `frictionless_ckan_mapper.frictionless_to_ckan` as `transform_payload`, and `frictionless_ckan_mapper.ckan_to_frictionless` as `transform_response`.
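A minimal sketch of that pattern, following the description above and assuming `client` was created as in the Quick start (`dailyprices` is the dataset used there):

```python
import frictionless_ckan_mapper

# Fetch a dataset via the plumbing API while keeping Frictionless formats on both sides
dataset = client.action(
    "package_show",
    {"id": "dailyprices"},
    http_get=True,
    transform_payload=frictionless_ckan_mapper.frictionless_to_ckan,
    transform_response=frictionless_ckan_mapper.ckan_to_frictionless,
)
```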
#### In JavaScript
Arguments:
| Name | Type | Default | Description |
| ------------ | ------------------- | ------------------ | ------------------------------------------------------------ |
| `actionName` | <code>string</code> | (required) | The action name, for example, `site_read`, `package_show`… |
| `payload` | <code>object</code> | (required) | The payload being sent to CKAN. When a payload is provided to a GET request, it will be converted to URL parameters and each key will be converted to snake case. |
| `useHttpGet` | <code>boolean</code> | <code>false</code> | Optional, if `true` will make a `GET` request, otherwise `POST`. |
>[!note]The JavaScript implementation uses the CKAN dataset and resource formats (rather than Frictionless formats).
In other words, to stick to Frictionless formats, you need to convert from Frictionless to CKAN before calling `action` , and from CKAN to Frictionless after calling `action`.
## Metadata reference
>[!info]Your site may have custom metadata that differs from the example set below.
### Profile
**(`string`)** Defaults to _data-resource_.
The profile of this descriptor.
Every Package and Resource descriptor has a profile. The default profile, if none is declared, is `data-package` for Package and `data-resource` for Resource.
#### Examples
- `{"profile":"tabular-data-package"}`
- `{"profile":"http://example.com/my-profiles-json-schema.json"}`
### Name
**(`string`)**
An identifier string. Lower case characters with `.`, `_`, `-` and `/` are allowed.
This is ideally a url-usable and human-readable name. Name `SHOULD` be invariant, meaning it `SHOULD NOT` change when its parent descriptor is updated.
#### Example
- `{"name":"my-nice-name"}`
### Path
A reference to the data for this resource, as either a path as a string, or an array of paths as strings, each of which is a valid URI.
The dereferenced value of each referenced data source in `path` `MUST` be commensurate with a native, dereferenced representation of the data the resource describes. For example, in a *Tabular* Data Resource, this means that the dereferenced value of `path` `MUST` be an array.
#### Validation
##### It must satisfy one of these conditions
###### Path
**(`string`)**
A fully qualified URL, or a POSIX file path.
Implementations need to negotiate the type of path provided, and dereference the data accordingly.
**Examples**
- `{"path":"file.csv"}`
- `{"path":"http://example.com/file.csv"}`
**(`array`)**
**Examples**
- `["file.csv"]`
- `["http://example.com/file.csv"]`
#### Examples
- `{"path":["file.csv","file2.csv"]}`
- `{"path":["http://example.com/file.csv","http://example.com/file2.csv"]}`
- `{"path":"http://example.com/file.csv"}`
### Data
Inline data for this resource.
### Schema
**(`object`)**
A schema for this resource.
### Title
**(`string`)**
A human-readable title.
#### Example
- `{"title":"My Package Title"}`
### Description
**(`string`)**
A text description. Markdown is encouraged.
#### Example
- `{"description":"# My Package description\nAll about my package."}`
### Home Page
**(`string`)**
The home on the web that is related to this data package.
#### Example
- `{"homepage":"http://example.com/"}`
### Sources
**(`array`)**
The raw sources for this resource.
#### Example
- `{"sources":[{"title":"World Bank and OECD","path":"http://data.worldbank.org/indicator/NY.GDP.MKTP.CD"}]}`
### Licenses
**(`array`)**
The license(s) under which the resource is published.
This property is not legally binding and does not guarantee that the package is licensed under the terms defined herein.
#### Example
- `{"licenses":[{"name":"odc-pddl-1.0","path":"http://opendatacommons.org/licenses/pddl/","title":"Open Data Commons Public Domain Dedication and License v1.0"}]}`
### Format
**(`string`)**
The file format of this resource.
`csv`, `xls`, `json` are examples of common formats.
#### Example
- `{"format":"xls"}`
### Media Type
**(`string`)**
The media type of this resource. Can be any valid media type listed with [IANA](https://www.iana.org/assignments/media-types/media-types.xhtml).
#### Example
- `{"mediatype":"text/csv"}`
### Encoding
**(`string`)** Defaults to _utf-8_.
The file encoding of this resource.
#### Example
- `{"encoding":"utf-8"}`
### Bytes
**(`integer`)**
The size of this resource in bytes.
#### Example
- `{"bytes":2082}`
### Hash
**(`string`)**
The MD5 hash of this resource. Indicate other hashing algorithms with the {'{algorithm}'}:{'{hash}'} format.
#### Examples
- `{"hash":"d25c9c77f588f5dc32059d2da1136c02"}`
- `{"hash":"SHA256:5262f12512590031bbcc9a430452bfd75c2791ad6771320bb4b5728bfb78c4d0"}`
## Generating templates
You can use [`jsv`](https://github.com/datopian/jsv) to generate a template script in Python, JavaScript, and R.
To install it:
```
$ npm install -g git+https://github.com/datopian/jsv.git
```
### Python
```
$ jsv data-resource.json --output py
```
**Output**
```python
dataset_metadata = {
"profile": "data-resource", # The profile of this descriptor.
# [example] "profile": "tabular-data-package"
# [example] "profile": "http://example.com/my-profiles-json-schema.json"
"name": "my-nice-name", # An identifier string. Lower case characters with `.`, `_`, `-` and `/` are allowed.
"path": ["file.csv","file2.csv"], # A reference to the data for this resource, as either a path as a string, or an array of paths as strings. of valid URIs.
# [example] "path": ["http://example.com/file.csv","http://example.com/file2.csv"]
# [example] "path": "http://example.com/file.csv"
"data": None, # Inline data for this resource.
"schema": None, # A schema for this resource.
"title": "My Package Title", # A human-readable title.
"description": "# My Package description\nAll about my package.", # A text description. Markdown is encouraged.
"homepage": "http://example.com/", # The home on the web that is related to this data package.
"sources": [{"title":"World Bank and OECD","path":"http://data.worldbank.org/indicator/NY.GDP.MKTP.CD"}], # The raw sources for this resource.
"licenses": [{"name":"odc-pddl-1.0","path":"http://opendatacommons.org/licenses/pddl/","title":"Open Data Commons Public Domain Dedication and License v1.0"}], # The license(s) under which the resource is published.
"format": "xls", # The file format of this resource.
"mediatype": "text/csv", # The media type of this resource. Can be any valid media type listed with [IANA](https://www.iana.org/assignments/media-types/media-types.xhtml).
"encoding": "utf-8", # The file encoding of this resource.
# [example] "encoding": "utf-8"
"bytes": 2082, # The size of this resource in bytes.
"hash": "d25c9c77f588f5dc32059d2da1136c02", # The MD5 hash of this resource. Indicate other hashing algorithms with the {algorithm}:{hash} format.
# [example] "hash": "SHA256:5262f12512590031bbcc9a430452bfd75c2791ad6771320bb4b5728bfb78c4d0"
}
```
### JavaScript
```
$ jsv data-resource.json --output js
```
**Output**
```javascript
const datasetMetadata = {
// The profile of this descriptor.
profile: "data-resource",
// [example] profile: "tabular-data-package"
// [example] profile: "http://example.com/my-profiles-json-schema.json"
// An identifier string. Lower case characters with `.`, `_`, `-` and `/` are allowed.
name: "my-nice-name",
// A reference to the data for this resource: either a single path string or an array of path strings. Paths must be valid URIs.
path: ["file.csv", "file2.csv"],
// [example] path: ["http://example.com/file.csv","http://example.com/file2.csv"]
// [example] path: "http://example.com/file.csv"
// Inline data for this resource.
data: null,
// A schema for this resource.
schema: null,
// A human-readable title.
title: "My Package Title",
// A text description. Markdown is encouraged.
description: "# My Package description\nAll about my package.",
// The home on the web that is related to this data package.
homepage: "http://example.com/",
// The raw sources for this resource.
sources: [
{
title: "World Bank and OECD",
path: "http://data.worldbank.org/indicator/NY.GDP.MKTP.CD",
},
],
// The license(s) under which the resource is published.
licenses: [
{
name: "odc-pddl-1.0",
path: "http://opendatacommons.org/licenses/pddl/",
title: "Open Data Commons Public Domain Dedication and License v1.0",
},
],
// The file format of this resource.
format: "xls",
// The media type of this resource. Can be any valid media type listed with [IANA](https://www.iana.org/assignments/media-types/media-types.xhtml).
mediatype: "text/csv",
// The file encoding of this resource.
encoding: "utf-8",
// [example] encoding: "utf-8"
// The size of this resource in bytes.
bytes: 2082,
// The MD5 hash of this resource. Indicate other hashing algorithms with the {algorithm}:{hash} format.
hash: "d25c9c77f588f5dc32059d2da1136c02",
// [example] hash: "SHA256:5262f12512590031bbcc9a430452bfd75c2791ad6771320bb4b5728bfb78c4d0"
};
```
### R
```
$ jsv data-resource.json --output r
```
**Output**
```r
# The profile of this descriptor.
profile <- "data-resource"
# [example] profile <- "tabular-data-package"
# [example] profile <- "http://example.com/my-profiles-json-schema.json"
# An identifier string. Lower case characters with `.`, `_`, `-` and `/` are allowed.
name <- "my-nice-name"
# A reference to the data for this resource: either a single path string or an array of path strings. Paths must be valid URIs.
path <- c("file.csv", "file2.csv")
# [example] path <- c("http://example.com/file.csv", "http://example.com/file2.csv")
# [example] path <- "http://example.com/file.csv"
# Inline data for this resource.
data <- NA
# A schema for this resource.
schema <- NA
# A human-readable title.
title <- "My Package Title"
# A text description. Markdown is encouraged.
description <- "# My Package description\nAll about my package."
# The home on the web that is related to this data package.
homepage <- "http://example.com/"
# The raw sources for this resource.
sources <- list(list(title = "World Bank and OECD", path = "http://data.worldbank.org/indicator/NY.GDP.MKTP.CD"))
# The license(s) under which the resource is published.
licenses <- list(list(name = "odc-pddl-1.0", path = "http://opendatacommons.org/licenses/pddl/", title = "Open Data Commons Public Domain Dedication and License v1.0"))
# The file format of this resource.
format <- "xls"
# The media type of this resource. Can be any valid media type listed with [IANA](https://www.iana.org/assignments/media-types/media-types.xhtml).
mediatype <- "text/csv"
# The file encoding of this resource.
encoding <- "utf-8"
# [example] encoding <- "utf-8"
# The size of this resource in bytes.
bytes <- 2082L
# The MD5 hash of this resource. Indicate other hashing algorithms with the {algorithm}:{hash} format.
hash <- "d25c9c77f588f5dc32059d2da1136c02"
# [example] hash <- "SHA256:5262f12512590031bbcc9a430452bfd75c2791ad6771320bb4b5728bfb78c4d0"
```
## Design Principles
The client **should** use Frictionless formats by default for describing dataset and resource objects passed to client methods.
In addition, where more than metadata is needed (e.g., we need to access the data stream, or get the schema) we expect the _Dataset_ and _Resource_ objects to follow the [Frictionless Data Lib pattern](https://github.com/frictionlessdata/project/blob/master/rfcs/0004-frictionless-data-lib-pattern.md).
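As a rough illustration of "Frictionless by default" (the function below is a hypothetical stand-in, not the actual client API), the dataset and resource objects passed to client methods are plain Data Package / Data Resource descriptors:
```python
# A minimal sketch, not the real client API: `push_dataset` is a hypothetical
# stand-in that only illustrates the shape of the Frictionless descriptor a
# client method would receive.
def push_dataset(descriptor):
    # A real client would validate the descriptor and send it to the portal.
    assert "name" in descriptor and "resources" in descriptor
    print("Would push dataset %r with %d resource(s)"
          % (descriptor["name"], len(descriptor["resources"])))

push_dataset({
    "name": "my-nice-name",
    "title": "My Package Title",
    "resources": [
        {"name": "file", "path": "file.csv", "format": "csv", "encoding": "utf-8"},
    ],
})
```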


@ -1,108 +0,0 @@
# CKAN Enterprise
## Introduction
CKAN Enterprise is our name for what we plan to become our standard "base" distribution of CKAN going forward:
* It is a CKAN standard code base with micro-services.
* Enterprise grade data catalog and portal targeted at Gov (open data portals) and Enterprise (Data Catalogs +).
* It is also known as [Datopian DMS](https://www.datopian.com/datopian-dms/).
## Roadmap 2021 and beyond
| | Current | CKAN Enterprise |
|-------------------|--------------------------------------------------------------------------------------------|-----------------------------------------------------------------|
| Raw storage | Filestore | Giftless |
| Data Loader (db) | DataPusher extension | Aircan |
| Data Storage (db) | Postgres | Any database engine. By default, Postgres |
| Data API (read) | Built-in DataStore extension's API including SQL endpoint | GraphQL based standalone micro-service |
| Frontend (public) | Built-in frontend in the CKAN Classic Python app (some projects use a Node.js app) | PortalJS or nodejs app |
| Data Explorer | ReclineJS (some projects that use a Node.js frontend have a React-based Data Explorer) | GraphQL based Data Explorer |
| Auth | Traditional login/password + extendable with CKAN Classic extensions | SSO with default Google, Github, Facebook and Microsoft options |
| Permissions | CKAN Classic based permissions | Existing permissions exposed via JWT based authz API |
## Timeline 2021
To develop a base distribution of CKAN Enterprise, we want to build a demo project with the features from the roadmap. This way we can:
* understand its advantages/limitations;
* compare against other instances of CKAN;
* demonstrate for the potential clients.
High level overview of the planned features with ETA:
| Name | Description | Effort | ETA |
| ----------------------------- | ------------------------------------ | ------ | --- |
| [Init](#Init) | Select CKAN version and deploy to DX | xs | Q2 |
| [Blobstore](#Blobstore) | Integrate Giftless for raw storage | s | Q2 |
| [Versioning](#Versioning) | Develop/integrate new versioning sys | l | Q3 |
| [DataLoader](#DataLoader) | Develop/integrate Aircan | xl | Q3 |
| [Data API](#Data-API) | Integrate new Data API (read) | m | Q2 |
| [Frontend](#Frontend) | Build a theme using PortalJS | s | Q2 |
| [DataExplorer](#DataExplorer) | Integrate into PortalJS | s | Q2 |
| [Permissions](#Permissions) | Develop permissions in read frontend | m | Q4 |
| [Auth](#Auth) | Integrate | s | Q4 |
### Init
Initialize a new project for development of CKAN Enterprise.
Tasks:
* Boot project in Datopian-DX cluster
* Use CKAN v2.8.x (latest patch) or 2.9.x
* Don't setup DataPusher
* Namespace: `ckan-enterprise`
* Domain: `enterprise.ckan.datopian.com`
### Blobstore
See [blob storage](/docs/dms/blob-storage#ckan-v3)
### Versioning
See [versioning](/docs/dms/versioning#ckan-v3)
### DataLoader
See [DataLoader](/docs/dms/load)
### Data API
* Install new [Data API service](https://github.com/datopian/data-api) in the project
* Install Hasura service in the project
* Set it up to work with DB of CKAN Enterprise
* Read more about Data API [here](/docs/dms/data-api#read-api-3)
Notes:
* We could experiment and use various features of Hasura, eg:
* Setting up row/column limits per user role (permissions)
* Subscriptions to auto load new data rows
### Frontend
PortalJS for the read frontend of CKAN Enterprise. [Read more](/docs/dms/frontend/#frontend).
### DataExplorer
A new Data Explorer based on GraphQL API: https://github.com/datopian/data-explorer-graphql
### Permissions
See [permissions](/docs/dms/permissions#permissions-authorization).
### Auth
Next-generation, Kratos-based authentication (mostly SSO, with no traditional login by default) with the following options out of the box:
* GitHub
* Google
* Facebook
* Microsoft
Easy to add:
* Discord
* GitLab
* Slack


@ -1,365 +0,0 @@
# CKAN v3
## Introduction
This document describes the architectures of CKAN v2 ("CKAN Classic"), CKAN v3 (also known as "CKAN Next Gen" for Next Generation), and CKAN v3 hybrid. The latter is an intermediate approach towards v3, where we still use CKAN v2 and common extensions, and only create microservices for new features.
You will also find out how to do common tasks such as theming or testing, in each of the architectures.
*Note: this blog post has an overview of the more decoupled, microservices approach at the core of v3: https://www.datopian.com/2021/05/17/a-more-decoupled-ckan/*
## CKAN v2, CKAN v3 and Why v3
In yellow, you see one single Python process:
```mermaid
graph TB
subgraph ckanclassic["CKAN Classic"]
ckancore["Core"]
end
```
When you want to extend core functionality of CKAN v2 (Classic), you write a Python package that must be installed in CKAN. This way, the extension will also run in the same process as the core functionality. This is known as a monolithic architecture.
```mermaid
graph TB
subgraph ckanclassic["CKAN Classic"]
ckancore["Core"] --> ckanext["CKAN Extension 1"]
end
```
When you start to add multiple features through extensions, what you get is one single Python process running many unrelated functionalities.
```mermaid
graph TB
subgraph ckanclassic["CKAN Classic"]
ckancore["Core"] --> ckanext["CKAN Extension 1"]
ckancore --> ckanext2["CKAN Extension 2"]
ckancore --> ckanext3["CKAN Extension 3"]
ckancore --> ckanext4["CKAN Extension 4"]
ckancore --> ckanext5["CKAN Extension 5"]
end
```
This monolithic approach has advantages in terms of simplicity of development and deployment, especially when the system is small. However, as it grows in scale and scope, there are an increasing number of issues.
In this approach, an optional extension has the ability to crash the whole CKAN instance. Every new feature must be written in the same language and framework (e.g. Python, leveraging Flask or Django). And, perhaps most fundamentally, the overall system is highly coupled, making it complex and hard to understand, debug, extend, and evolve.
### Microservices and CKAN v3
The main way to address these problems while gaining extra benefits is to move to a microservices-based architecture.
Thus, we recommend building the next version of CKAN -- CKAN v3 -- on a microservices approach.
>[!tip]CKAN v3 is sometimes also referred to as CKAN Next Gen(eration).
With microservices, each piece of functionality runs in its own service and process.
```mermaid
graph TB
subgraph ckanapi3["CKAN API 3"]
ckanapi31["API 3"]
end
subgraph ckanapi2["CKAN API 2"]
ckanapi21["API 2"]
end
subgraph ckanapi1["CKAN API 1"]
ckanapi11["API 1"]
end
subgraph ckanfrontend["CKAN frontend"]
ckanfrontend1["Frontend"]
end
ckanfrontend1 --> ckanapi11
ckanfrontend1 --> ckanapi21
ckanfrontend1 --> ckanapi31
```
### Incremental Evolution Hybrid v3
One of the other advantages of the microservices approach is that it can also be used to extend and evolve current CKAN v2 solutions in an incremental way. We term these kinds of solutions "Hybrid v3," as they are a mix of v2 and v3 together.
For example, a Hybrid v3 data portal could use a new microservice written in Node for the frontend, and combine that with CKAN v2 (with v2 extensions).
```mermaid
graph TB
subgraph ckanapi3["CKAN API 3"]
ckanapi31["API 3"]
end
subgraph ckanapi2["CKAN API 2"]
ckanapi21["API 2"]
end
subgraph ckanapi1["CKAN API 1"]
ckanapi11["API 1"]
end
subgraph ckanfrontend["CKAN frontend"]
ckanfrontend1["Frontend"]
end
subgraph ckanclassic["CKAN Classic"]
ckancore["Core"] --> ckanext["CKAN Extension 1"]
ckancore --> ckanext2["CKAN Extension 2"]
end
ckanfrontend1 --> ckancore
ckanfrontend1 --> ckanapi11
ckanfrontend1 --> ckanapi21
ckanfrontend1 --> ckanapi31
```
The hybrid approach means we can evolve CKAN v2 "Classic" to CKAN v3 "Next Gen" incrementally. In particular, it allows people to keep using their existing v2 extensions, and upgrade them to new microservices gradually.
### Comparison of Approaches
| | CKAN v2 (Classic) | CKAN v3 (Next Gen) | CKAN v3 Hybrid |
| ------------ | ------------------| -------------------| ---------------|
| Architecture | Monolithic | Microservice | Microservice with v2 core |
| Language | Python | You can write services in any language you like.<br/><br/>Frontend default: JS.<br/>Backend default: Python | Python and any language you like for microservices. |
| Frontend (and theming) | Python with Python CKAN extension | Flexible. Default is modern JS/NodeJS based | Can use old frontend but default to new JS-based frontend. |
| Data Packages | Add-on, no integration | Default internal and external format | Data Packages with converter to old CKAN format. |
| Extension | Extensions are libraries that are added to the core runtime. They must therefore be built in Python and are loaded into the core process at build time. "Template/inheritance" model where hooks are in core and it is core that loads and calls plugins. This means that if a hook does not exist in core then the extension is stymied. | Extensions are microservices and can be written in any language. They are loaded into the URL space via the Kubernetes routing manager. Extensions hook into "core" via APIs (rather than in code). Follows a "composition" model rather than an inheritance model. | Can use old style extensions or microservices. |
| Resource Scaling | You have a single application so scaling is of the core application. | You can scale individual microservices as needed. | Mix of v2 and v3 |
## Why v3: Long Version
What are the problems with CKAN v2's monolithic architecture in relation to microservices v3?
* **Poor Developer Experience (DX), innovability, and scalability due to coupling**. Monolithic means "one big system" => Coupling & Complexity => hard to understand, change and extend. Changes in one area can unexpectedly affect other areas.
* DX to develop a small new API requires wiring into CKAN core via an extension. Extensions can interact in unexpected ways.
* The core of people who fully understand CKAN has stayed small for a reason: there's a lot to understand.
* https://github.com/ckan/ckan/issues/5333 is an example of a small bug that's hard to track down due to various paths involved.
* Harder to make incremental changes due to coupling (e.g. Python 3 upgrade requires *everything* to be fixed at once - can't do rolling releases).
* **Stability**. One bad extension crashes or slows down the whole system
* **One language => Less developer flexibility (Poor DX)**. Have to write *everything* in Python, including the frontend. This is an issue especially for the frontend: almost all modern frontend development is heavily Javascript-based and theme is the #1 thing people want to customize in CKAN. At the moment, that requires installing *all* of CKAN core (using Docker) plus some familiarity with Python and Jinja templating. This is a big ask.
* **Extension stability and testing**. Testing of extensions is painful (at least without careful factoring into a separate mini library), so extensions are often not tested; they don't have Continuous Integration (CI) or Continuous Deployment (CD). As an example, a highly experienced Python developer at Datopian was still struggling to get extension tests working 6 months into their CKAN work.
* **DX is poor especially when getting started**. Getting CKAN up and running requires multiple external services (database, Solr, Redis, etc.), making Docker the only viable way for bootstrapping a local development environment. This makes getting started with CKAN daunting and painful.
* **Vertical scalability is poor**. Scaling the system is costly as you have to replicate the whole core process in every machine.
* **System is highly coupled.** Because extensions run in-process, they tend to end up with significant coupling to core, which makes them brittle (this has improved with plugins.toolkit).
* Upgrading core to Python 3 requires upgrading *all* extensions because they run in the same process.
* Search Index is not a separate API, but in Core. So replacing Solr is hard.
The top 2 customizations of CKAN are slow and painful and require deep knowledge of CKAN:
* Theming a site.
* Customizing the metadata.
## Architectures
### CKAN v2 (Classic)
This diagram is based on the file `docker-compose.yml` of [github.com/okfn/docker-ckan](https://github.com/okfn/docker-ckan) (`docker-compose.dev.yml` has the same components, but different configuration).
One difference between this diagram and the file is that we do not include DataPusher, as it is not a required dependency.
>[!tip]Databases may run as Docker containers, or rely on third-party services such as Amazon Relational Database Service (RDS).
```mermaid
graph LR
CKAN[CKAN web app]
CKAN --> DB[(Database)]
CKAN --> Solr[(Solr)]
CKAN --> Redis[(Redis)]
subgraph Docker container
CKAN
end
```
Same setup showing some of the key extensions explicitly:
```mermaid
graph LR
core[CKAN Core] --> DB[(Database)]
datastore --> DB2[(Database - DataStore)]
core --> Solr[(Solr)]
core --> Redis[(Redis)]
subgraph Docker container
core
datastore
datapusher
imageview
...
end
```
CKAN ships with several core extensions that are built-in. Here, together with the list of main components, we list a couple of them:
Name | Type | Repository | Description
-----|------|------------|------------
CKAN | Application (API + Worker) | [Link](https://github.com/ckan/ckan) | Data management system (DMS) for powering data hubs and data portals. It's a monolithic web application that includes several built-in extensions and dependencies, such as a job queue service. In theory, it's possible to run it without any extensions.
datapusher | CKAN Extension | [Link](https://github.com/ckan/ckan/tree/master/ckanext/datapusher) | It could also be called "datapusher-connect." It's glue code to connect with a separate microservice called DataPusher, which performs actions when new data arrives.
datastore | CKAN Extension | [Link](https://github.com/ckan/ckan/tree/master/ckanext/datastore) | The interface between CKAN and the structured-data database, the one receiving datasets and resources (CSVs). It includes an API for the database and an administrative UI.
imageview | CKAN Extension | [Link](https://github.com/ckan/ckan/tree/master/ckanext/imageview) | It provides an interface for creating HTML templates for image resources.
multilingual | CKAN Extension | [Link](https://github.com/ckan/ckan/tree/master/ckanext/multilingual) | It provides an interface for translation and localization.
Database | Database | | People tend to use a single PostgreSQL instance for this. Separated into multiple databases, it's the place where CKAN stores its own information (sometimes referred to as "MetaStore" and "HubStore"), rows of resources (StructuredStore or DataStore), and raw datasets and resources ("BlobStore" or "FileStore"). The latter may store data in the local filesystem or cloud providers, via extensions.
Solr | Database | | It provides indexing and full-text search for CKAN.
Redis | Database | | Lightweight key-value store, used for caching and job queues.
### CKAN v3 (Next Gen)
CKAN Next Gen is still a DMS, like CKAN Classic; but rather than a monolithic architecture, it follows the microservices approach. CKAN Classic is no longer a dependency, as we have smaller services providing functionality that we may or may not choose to include. This description is based on [Datopian's Technical Documentation](/docs/dms/ckan-v3/next-gen/#roadmap).
```mermaid
graph LR
subgraph api3["..."]
api31["API"]
end
subgraph api2["Administration"]
api21["API"]
end
subgraph api1["Authentication"]
api11["API"]
end
subgraph frontend["Frontend"]
frontendapi["API"]
end
subgraph storage["Raw Resources Storage"]
storageapi["API"]
end
storageapi --> cloudstorage[(Cloud Storage)]
frontendapi --> storageapi
frontendapi --> api11
frontendapi --> api21
frontendapi --> api31
```
At this moment, many important features are only available through CKAN extensions, so that brings us to the hybrid approach.
### CKAN Hybrid v3 (Next Gen)
We may sometimes make an explicit distinction between CKAN v3 "hybrid" and "pure." The reason is that we are not fully there yet: there are still many opportunities to extract features out of CKAN and CKAN extensions.
In this approach, we still rely on CKAN Classic and all its extensions. Many of them already have extensive tests and bug fixes, so we can deliver more when we are not forced to rewrite everything from scratch.
```mermaid
graph TB
subgraph ckanapi3["CKAN API 3"]
ckanapi31["API 3"]
end
subgraph ckanapi2["CKAN API 2"]
ckanapi21["API 2"]
end
subgraph ckanapi1["CKAN API 1"]
ckanapi11["API 1"]
end
subgraph ckanfrontend["Frontend"]
ckanfrontend1["Frontend v2"]
theme["[Project-specific theme]"]
end
subgraph ckanclassic["CKAN Classic"]
ckancore["Core"] --> ckanext["CKAN Extension 1"]
ckancore --> ckanext2["[Project-specific extension]"]
end
ckanfrontend1 --> ckancore
ckanfrontend1 --> ckanapi11
ckanfrontend1 --> ckanapi21
ckanfrontend1 --> ckanapi31
```
Name | Type | Repository | Description
-----|------|------------|------------
Frontend v2 | Application | [Link](https://github.com/datopian/frontend-v2) | Node application for Data Portals. It communicates with a CKAN Classic instance, through its API, to get data and render HTML. It is written to be extensible, such as connecting to other applications and theming.
[Project-specific theme] | Frontend Theme | e.g., [Link](https://github.com/datopian/frontend-oddk) | Extension to Frontend v2 where you can personalize the interface, create different pages, and connect with other APIs.
[API 1] | Application | e.g., [Link](https://github.com/datopian/data-subscriptions) | Any application with an API to communicate with the user-facing Frontend v2 or to run tasks in the background. Given the current architecture, this API is usually designed to work with CKAN interfaces. Over time, we may choose to make it more generic, and even replace CKAN Core with other applications.
## Job Stories
In this spreadsheet, you will find a list of common job stories in CKAN projects. Also, how you can accomplish them in CKAN v2, v3, and Hybrid v3.
https://docs.google.com/spreadsheets/d/1cLK8xylprmVsoQIbdphqz9-ccSpdDABQExvKdvNJqaQ/edit#gid=757361856
## Glossary
### API
An HTTP API, usually following the REST style.
### Application
A Python package, an API, a worker... It may have other applications as dependencies.
### CKAN Extension
A Python package following specification from [CKAN Extending guide](https://docs.ckan.org/en/2.8/extensions/index.html).
### Database
An organized collection of data.
### Dataset
A group of resources made to be distributed together.
### Frontend Theme
A Node project specializing behavior present in [Frontend v2](https://github.com/datopian/frontend-v2).
### Resource
A data blob. Common formats are CSV, JSON, and PDF.
### System
A group of applications and databases that work together to accomplish a set of tasks.
### Worker
An application that runs tasks in the background. Workers may run recurrently according to a given schedule, or as soon as requested by another application.
## Appendix
### Architecture - CKAN v2 with DataPusher
```mermaid
graph TB
subgraph DataPusher
datapusherapi["DataPusher API"]
datapusherworker["CKAN Service Provider"]
SQLite[(SQLite)]
end
subgraph CKAN
core
datapusher[datapusher ext]
datastore
...
end
core[CKAN Core] --> datastore
datastore --> DB[(Database)]
datapusherapi --> core
datapusher --> datapusherapi
```
Name | Type | Repository | Description
-----|------|------------|------------
DataPusher | System | [Link](https://github.com/ckan/datapusher) | Microservice that parses data files and uploads them to the datastore.
DataPusher API | API | [Link](https://github.com/ckan/datapusher) | HTTP API written in Flask. It is called from the built-in `datapusher` CKAN extension whenever a resource is created (and has the right type).
CKAN Service Provider | Worker | [Link](https://github.com/ckan/ckan-service-provider) | Library for making web services that make functions available as synchronous or asynchronous jobs.
SQLite | Database | | Unknown use. Possibly a worker dependency.
### Old Next Gen Page
Prior to this page, we had one called "Next Gen." It has intersections with this article, although it focuses more on the benefits of microservices. For the time being, the page still exists in [/ckan-v3/next-gen](/docs/dms/ckan-v3/next-gen), although it may get merged with this one in the future.


@ -1,203 +0,0 @@
# Next Gen
“Next Gen” (NG) is our name for the evolution of CKAN from its current state as “CKAN Classic”.
Next Gen has a decoupled, microservice architecture in contrast to CKAN Classic's monolithic architecture. It is also built from the ground up on the Frictionless Data principles and specifications which provide a simple, well-defined and widely adopted set of core interfaces and tooling for managing data.
## Classic to Next Gen
CKAN Classic: monolithic architecture -- everything is one big Python application. Extension is done at code level and "compiled in" at compile/run-time (i.e. you end up with one big Docker image).
CKAN Next Gen: decoupled, service-oriented -- services connected by network calls. Extension is done by adding new services.
```mermaid
graph LR
subgraph "CKAN Classic"
plugins
end
subgraph "CKAN Next Gen"
microservices
end
plugins --> microservices
```
You can read more about monolithic vs microservice architectures in the [Appendix below](#appendix-monolithic-vs-microservice-architecture).
## Next Gen lays the foundation for the future and brings major immediate benefits
Next Gen's new approach is important in several major ways.
### Microservices are the Future
First, decoupled microservices have become *the* way to design and deploy (web) applications, after first being pioneered by the likes of Amazon in the early 2000s. The last five to ten years have brought microservices "for the masses", with the relevant tooling and technology standardized, open-sourced and widely deployed -- not only containerization such as Docker and Kubernetes, but also programming languages like (server-side) JavaScript and Golang.
By adopting a microservice approach, CKAN can reap the benefits of what is becoming a mature and standard way to design and build (web) applications. This includes the immediate advantages of being aligned with the prevailing technical paradigm, such as tooling and developer familiarity.
### Microservices bring Scalability, Reliability, Extensibility and Flexibility
In addition, and even more importantly, the microservices approach brings major benefits in:
1. **Scalability**: dramatically easier and cheaper to scale up -- and down -- in size *and* complexity. Size-wise this is because you can replicate individual services rather than the whole application. Complexity-wise this is because monolithic architectures tend to become "big", whereas service orientation encourages smaller, lightweight components with cleaner interfaces. This means you can have a much smaller core, making it easier to install, set up and extend. It also means you can use only what you need, making solutions easier to maintain and upgrade.
2. **Reliability**: easier (and cheaper) to build highly reliable, high availability solutions because microservices make isolation and replication easier. For example, in a microservice architecture a problem in CKAN's harvester won't impact your main portal because they run in separate containers. Similarly, you can scale the harvester system separately from the web frontend.
3. **Extensibility**: much easier to create and maintain extensions because they are a decoupled service and interfaces are leaner and cleaner.
4. **Flexibility** aka "Bring your own tech": services can be written in any language so, for example, you can write your frontend in JavaScript and your backend in Python. In a monolithic architecture all parts must be written in the same language because everything is compiled together. This flexibility makes it easier to use the best tool for the job. It also makes it much easier for teams to collaborate and cooperate, and it reduces bottlenecks in development.
ASIDE: decoupled microservices reflect the "unix" way of building networked applications. As with the "unix way" in general, whilst this approach is better -- and simpler -- in the long run, in the short run it often needs substantial foundational work (those Unix authors were legends!). It may also be, at least initially, more resource intensive and more complex infrastructurally. Thus, whilst this approach is "better", it is not surprising that it was initially used for complex and/or high-end applications, e.g. Amazon. This also explains why it took a while for this approach to get adoption -- it is only in the last few years that we have robust, lightweight, easy-to-use tooling and patterns for microservices -- "microservices for the masses" if you like.
In summary, the Next Gen approach provides an essential foundation for the continuing growth and evolution of CKAN as a platform for building world-class data portal and data management solutions.
## Evolution not Revolution: Next Gen Components Work with CKAN Classic
*Gradual evolution from CKAN classic (keep what is working, keep your investments, incremental change)*
Next Gen components are specifically designed to work with CKAN "Classic" in its current form. This means existing CKAN users can immediately benefit from Next Gen components and features whilst retaining the value of their existing investment. New (or existing) CKAN-based solutions can adopt a "hybrid" approach using components from both Classic and Next Gen. It also means that the owner of a CKAN-based solution can incrementally evolve from "Classic" to "Next Gen" by replacing one component at a time, gaining new functionality without sacrificing existing work.
ASIDE: we're fortunate that CKAN Classic itself was ahead of its time in its level of "service-orientation". From the start, it had a very rich and robust API, and it has continued to develop this, with almost all functionality exposed via the API. It is this rich API and well-factored design that makes it relatively straightforward to evolve CKAN in its current "Classic" form towards Next Gen.
## New Features plus Existing Functionality Improved
In addition to its architecture, Next Gen provides a variety of improvements and extensions to CKAN Classic's functionality. For example:
* Theming and Frontend Customization: theming and customizing CKAN's frontend has become radically easier and quicker. See [Frontend section &raquo;][frontend]
* DMS + CMS unified: integrate the full power of a modern CMS into your data portal and have one unified interface for data and content. See [Frontend section &raquo;][frontend]
* Data Explorer: the existing CKAN data preview/explorer has been completely rewritten in modern React-based Javascript (ReclineJS is now 7y old!). See [Data Explorer section &raquo;][explorer]
* Dashboards: build rich data-driven dashboards and integrate. See [Dashboards section &raquo;][dashboards]
* Harvesting: simpler, more powerful harvesting built on modern ETL. See [Harvesting section &raquo;][harvesting]
And each of these features is easily deployed into an existing CKAN solution!
[frontend]: /docs/dms/frontend
[explorer]: /docs/dms/data-explorer
[dashboards]: /docs/dms/dashboards
[harvesting]: /docs/dms/harvesting
## Roadmap
The journey to Next Gen from Classic can proceed step by step -- it does not need to be a big bang. Like refurbishing and extending a house, we can add a room here or renovate a room there whilst continuing to live happily in the building (and benefitting as our new bathroom comes online, or we get a new conservatory!).
Here's an overview of the journey to Next Gen and current implementation status. More granular information on particular features may sometimes be found on the individual feature page, for example for [Harvesting here](/docs/dms/harvesting#design).
```mermaid
graph LR
start[Start]
themefe[Read Frontend]
authfe[Authentication in FE]
authzfe[Authorization in FE]
previews[Previews]
explorer[Explorer]
permsserv[Permissions Service]
orgs[Organizations]
subgraph Start
start
end
subgraph Frontend
start --> themefe
themefe --> authfe
authfe --> authzfe
themefe --> revisioningfe[Revision UI]
end
subgraph Harvesting
start --> harvestetl[Harvesting ETL + Runner]
harvestetl --> harvestui[Harvest UI]
end
subgraph "Admin UI"
managedataset[Manage Dataset]
manageorg[Manage Organization]
manageuser[Manage Users]
manageconfig[Manage Config]
start --> managedataset
start --> manageorg
managedataset --> manageconfig
end
subgraph "Backend (API)"
start --> permsserv
start --> revision[Backend Revisioning]
end
datastore[DataStore]
subgraph DataStore
start --> datastore
datastore --> dataload[Data Load]
end
subgraph Explorer
themefe --> previews
previews --> explorer
end
subgraph Organizations
start --> orgs
end
subgraph Key
done[Done]
nearlydone[Nearly Done]
inprogress[In Progress]
next[Next Up]
end
classDef done fill:#21bf73,stroke:#333,stroke-width:3px;
classDef nearlydone fill:lightgreen,stroke:#333,stroke-width:3px;
classDef inprogress fill:orange,stroke:#333,stroke-width:2px;
classDef next fill:pink,stroke:#333,stroke-width:1px;
class done,themefe,previews,explorer,harvestetl done;
class nearlydone,authfe,harvestui nearlydone;
class inprogress,dataload inprogress;
class next,permsserv next;
```
## Appendix: Monolithic vs Microservice architecture
Monolithic: Libraries or modules communicate via function calls (inside one big application)
Microservices: Services communicate over a network
The best introduction and definition of microservices comes from Martin Fowler https://martinfowler.com/microservices/
> Microservice architectures will use libraries, but their primary way of componentizing their own software is by breaking down into services. We define libraries as components that are linked into a program and called using in-memory function calls, while services are out-of-process components who communicate with a mechanism such as a web service request, or remote procedure call. https://martinfowler.com/articles/microservices.html
### Monolithic
```mermaid
graph TD
subgraph "Monolithic - all inside"
a
b
c
end
a --in-memory function all--> b
a --in-memory function all--> c
```
### Microservice
```mermaid
graph TD
subgraph "A Container"
a
end
subgraph "B Container"
b
end
subgraph "C Container"
c
end
a -.network call.-> b
a -.network call.-> c
```


@ -1,23 +0,0 @@
---
sidebar: auto
---
# CKAN Classic
CKAN (Classic) already has great documentation at: https://docs.ckan.org/
This material is a complement to those docs as well as details of our particular setup. Here, among other things, you'll learn how to:
* [Get Started with CKAN for Development -- install and run CKAN on your local machine](/docs/dms/ckan/getting-started)
* [Play around with a CKAN instance including importing and visualising data](/docs/dms/ckan/play-around)
* [Install Extensions](/docs/dms/ckan/install-extension)
* [Create Your Own Extension](/docs/dms/ckan/create-extension)
* [Client Guide](/docs/dms/ckan-client-guide)
* [FAQ](/docs/dms/ckan/faq)
[start]: /docs/dms/ckan/getting-started
[play]: /docs/dms/ckan/play-around
[CKAN]: https://ckan.org/
[docs]: https://docs.ckan.org/


@ -1,162 +0,0 @@
---
sidebar: auto
---
# Introduction
A CKAN extension is a Python package that modifies or extends CKAN. Each extension contains one or more plugins that must be added to your CKAN config file to activate the extension's features.
## Creating and Installing extensions
1. Boot up your docker compose
```
docker-compose -f docker-compose.dev.yml up
```
2. To create an extension template using this docker composition execute:
```
docker-compose -f docker-compose.dev.yml exec ckan-dev /bin/bash -c "paster --plugin=ckan create -t ckanext ckanext-example_extension -o /srv/app/src_extensions"
```
This command will create an extension template in your local `./src` folder, which is mounted inside the containers at the `/srv/app/src_extensions` directory. Any extension cloned into the `src` folder will be installed in the CKAN container when booting up Docker Compose (`docker-compose up`). This includes installing any requirements listed in a `requirements.txt` (or `pip-requirements.txt`) file and running `python setup.py develop`.
3. Add the plugin to the `CKAN__PLUGINS` setting in your `.env` file.
```
CKAN__PLUGINS=stats text_view recline_view example_extension
```
4. Restart your docker-compose:
```
# Shut down your instance with ctrl+c and then run it again with:
docker-compose -f docker-compose.dev.yml up
```
> [!tip]CKAN will be started running on the paster development server with the '--reload' option to watch changes in the extension files.
You should see the following output in the console:
```
...
ckan-dev_1 | Installed /srv/app/src_extensions/ckanext-example_extension
...
```
## Edit the extension
Let's edit a template to change the way CKAN is displayed to the user!
1. First you will need write permissions to the extension folder since it was created by the user running docker. Replace `your_username` and execute the following command:
> [!tip]You can find out your current username by typing 'echo $USER' in the terminal.
```
sudo chown -R <your_username>:<your_username> src/ckanext-example_extension
```
2. The `paster create` command from earlier created all the files and folder structure needed for our extension. Open `src/ckanext-example_extension/ckanext/example_extension/plugin.py` to see the main file of our extension, which we will edit to add custom functionality:
```python
import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit


class Example_ExtensionPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IConfigurer)

    # IConfigurer
    def update_config(self, config_):
        toolkit.add_template_directory(config_, 'templates')
        toolkit.add_public_directory(config_, 'public')
        toolkit.add_resource('fanstatic', 'example_theme')
```
3. We will create a custom Flask Blueprint to extend our CKAN instance with more endpoints. In order to create a new blueprint and add an endpoint we need to:
- Import Blueprint and render_template from the flask module.
- Create the functions that will be used as endpoints
- Implement the IBlueprint interface in our plugin and add the new endpoint.
4. Import `Blueprint` and `render_template` from `flask`:
```python
import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit
from flask import Blueprint, render_template


class Example_ExtensionPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IConfigurer)

    # IConfigurer
    def update_config(self, config_):
        toolkit.add_template_directory(config_, 'templates')
        toolkit.add_public_directory(config_, 'public')
        toolkit.add_resource('fanstatic', 'example_extension')
```
5. Create a new function, `hello_plugin`:
```python
import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit
from flask import Blueprint, render_template


def hello_plugin():
    u'''A simple view function'''
    return u'Hello World, this is served from an extension'


class Example_ExtensionPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IConfigurer)

    # IConfigurer
    def update_config(self, config_):
        toolkit.add_template_directory(config_, 'templates')
        toolkit.add_public_directory(config_, 'public')
        toolkit.add_resource('fanstatic', 'example_extension')
```
6. Implement the IBlueprint interface in our plugin and add the new endpoint.
```python
import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit
from flask import Blueprint, render_template


def hello_plugin():
    u'''A simple view function'''
    return u'Hello World, this is served from an extension'


class Example_ExtensionPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IConfigurer)
    plugins.implements(plugins.IBlueprint)

    # IConfigurer
    def update_config(self, config_):
        toolkit.add_template_directory(config_, 'templates')
        toolkit.add_public_directory(config_, 'public')
        toolkit.add_resource('fanstatic', 'example_extension')

    # IBlueprint
    def get_blueprint(self):
        u'''Return a Flask Blueprint object to be registered by the app.'''
        # Create Blueprint for plugin
        blueprint = Blueprint(self.name, self.__module__)
        blueprint.template_folder = u'templates'
        # Add plugin url rules to Blueprint object
        blueprint.add_url_rule('/hello_plugin', '/hello_plugin', hello_plugin)
        return blueprint
```
7. Go back to the browser and navigate to http://ckan:5000/hello_plugin. You should see the value returned by our view!
![New Blueprint output](https://i.imgur.com/AZjTDbN.png)
Now that you have added a new view and endpoint to your plugin you are ready for the next step of the tutorial! You can also check the complete code of this plugin in the [ckan repository](https://github.com/ckan/ckan/tree/master/ckanext/example_flask_iblueprint).


@ -1,110 +0,0 @@
---
sidebar: auto
---
# FAQ
This page provides answers to some frequently asked questions.
## How to create an extension template in my local machine
You can use the `paster` command in the same way as a source install. To create an extension execute the following command:
```
docker-compose -f docker-compose.dev.yml exec ckan-dev /bin/bash -c "paster --plugin=ckan create -t ckanext ckanext-myext -o /srv/app/src_extensions"
```
This will create an extension template inside the container's folder `/srv/app/src_extensions` which is mapped to your local `src/` folder.
Now you can navigate to your local folder `src/` and see the extension created by the previous command and open the project in your favorite IDE.
## How to separate that extension in a new git repository so I can have the independence to install it in other instances
The crucial thing to understand is that each extension gets its own repository on GitHub (or elsewhere). You can either create the repository for the extension first and then clone it into `src/`, or do the opposite, as follows:
* Create the Extension, for example: `ckanext-myext`.
```
docker-compose -f docker-compose.dev.yml exec ckan-dev /bin/bash -c "paster --plugin=ckan create -t ckanext ckanext-myext -o /srv/app/src_extensions"
```
* Init your new git repository into the extension folder `src/ckanext-myext`
```
cd src/ckanext-myext
git init
```
* Configure remote/origin
```
git remote add origin <remote_repository_url>
```
* Add your files and push the first commit
```
git add .
git commit -m 'Initial Commit'
git push
```
**Note:** The `src/` folder is gitignored in `okfn/docker-ckan` repository, so initializing new git repositories inside is ok.
## How to quickly refresh the changes in my extension into the dockerized environment so I can have quick feedback of my changes
This docker-compose setup for the dev environment is already configured with `debug=True` in the configuration file, and it auto-reloads on changes to Python files and templates. You do not have to reload when making changes to HTML, JavaScript or configuration files - you just need to refresh the page in the browser.
See the CKAN images section of the [repository documentation](https://github.com/okfn/docker-ckan#ckan-images) for more detail
## How to run tests for my extension in the dockerized environment so I can have a quick test-development cycle
We write and store unit tests inside the `ckanext/myext/tests` directory. To run unit tests you need to be running the `ckan-dev` service of this docker-compose setup.
* Once running, in another terminal window run the test command:
```
docker-compose -f docker-compose.dev.yml exec ckan-dev nosetests --ckan-dev --nologcapture --reset-db -s -v --with-pylons=/srv/app/src_extensions/ckanext-myext/test.ini /srv/app/src_extensions/ckanext-myext/
```
You can also pass nosetest arguments to debug
```
--ipdb --ipdb-failure
```
**Note:** Right now all tests will be run, it is not possible to choose a specific file or test.
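For orientation, a minimal sketch of what a test module under `ckanext/myext/tests/` could look like; the file, class, and test names are hypothetical, and it assumes the nose-based helpers shipped with CKAN 2.8:
```python
# A minimal sketch of ckanext/myext/tests/test_plugin.py, assuming CKAN 2.8's
# nose-based test helpers; names here are hypothetical.
import ckan.tests.helpers as helpers


class TestMyExt(helpers.FunctionalTestBase):

    def test_status_show_is_reachable(self):
        # Boot a test app and hit a core endpoint to confirm the plugin loads.
        app = self._get_test_app()
        response = app.get('/api/3/action/status_show')
        assert response.status_int == 200
```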
## How to debug my methods in the dockerized environment so I can have a better understanding of whats going on with my logic
To run a container and be able to add a breakpoint with `pdb`, run the `ckan-dev` container with the `--service-ports` option:
```
docker-compose -f docker-compose.dev.yml run --service-ports ckan-dev
```
This will start a new container, displaying the standard output in your terminal. If you add a breakpoint in a source file in the `src` folder (`import pdb; pdb.set_trace()`) you will be able to inspect it in this terminal next time the code is executed.
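For example, a minimal sketch using the `hello_plugin` view from the extension tutorial (any function in your extension under `src/` works the same way):
```python
# A minimal sketch: a pdb breakpoint inside the hello_plugin view from the
# extension tutorial. The next request to /hello_plugin pauses execution in
# the terminal attached to the ckan-dev container.
def hello_plugin():
    import pdb; pdb.set_trace()  # execution stops here; inspect variables, then `c` to continue
    return u'Hello World, this is served from an extension'
```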
## How to debug core CKAN code
Currently, this docker-compose setup doesn't allow us to debug core CKAN code since it lives inside the container. However, we can do some hacks so the container uses a local clone of the CKAN core hosted in our machine. To do it:
- Create a new folder called `ckan_src` in this `docker-ckan` folder, at the same level as `src/`
- Clone ckan and checkout the version you want to debug/edit
```
git clone https://github.com/ckan/ckan/ ckan_src
cd ckan_src
git checkout ckan-2.8.3
```
- Edit `docker-compose.dev.yml` and add an entry to ckan-dev's and ckan-worker-dev's volumes. This will allow the docker container to access the CKAN code hosted in our machine.
```
- ./ckan_src:/srv/app/ckan_src
```
- Create a script in `ckan/docker-entrypoint.d/z_install_ckan.sh` to install CKAN inside the container from the cloned repository (instead of the one installed in the Dockerfile)
```
#!/bin/bash
echo "*********************************************"
echo "overriding with ckan installation with ckan_src"
pip install -e /srv/app/ckan_src
echo "*********************************************"
```
That's it. This will install CKAN inside the container in development mode, from the shared folder. Now you can open the `ckan_src/` folder from your favorite IDE and start working on CKAN.


@ -1,77 +0,0 @@
# CKAN: Getting Started for Development
## Prerequisites
CKAN has a rich tech stack so we have opted to standardize our instructions with Docker Compose, which will help you spin up every service in a few commands.
If you already have Docker Compose, you are ready to go!
If not, please, follow instructions on [how to install docker-compose](https://docs.docker.com/compose/install/).
On Ubuntu you can run:
```
sudo apt-get update
sudo apt-get install docker-compose
```
## Cloning the repo
```
git clone https://github.com/okfn/docker-ckan
# or git clone git@github.com:okfn/docker-ckan.git
cd docker-ckan
```
## Booting CKAN
Create a local environment file:
```
cp .env.example .env
```
Build and Run the instances:
> [!tip]'docker-compose' must be run with 'sudo'. If you want to change this, you can follow the steps below. NOTE: The 'docker' group grants privileges equivalent to the 'root' user.
Create the `docker` group: `sudo groupadd docker`
Add your user to the `docker` group: `sudo usermod -aG docker $USER`
Change the storage directory ownership from `root` to `ckan` by adding the commands below to `ckan/Dockerfile.dev`:
```
RUN mkdir -p /var/lib/ckan/storage/uploads
RUN chown -R ckan:ckan /var/lib/ckan/storage
```
At this point, you can log out and log back in for these changes to apply. You can also use the command `newgrp docker` to temporarily enable the new group for the current terminal session.
```
docker-compose -f docker-compose.dev.yml up --build
```
When you see this log message:
![](https://i.imgur.com/WUIiNRt.png)
You can navigate to `http://localhost:5000`
![CKAN Home Page](https://i.imgur.com/T5LWo8A.png)
and log in with the credentials that the docker-compose setup created for you (user: `ckan_admin`, password: `test1234`).
>[!tip]To learn key concepts about CKAN, including what it is and how it works, you can read the User Guide.
[CKAN User Guide](https://docs.ckan.org/en/2.8/user-guide.html).
## Next Steps
[Play around with CKAN portal](/docs/dms/ckan/play-around).
## Troubleshooting
Login / Logout button breaks the experience:
- Change the URL from `http://ckan:5000` to `http://localhost:5000`. A complete fix is described in the [Play around with CKAN portal](/docs/dms/ckan/play-around). (Your next step. ;))


@ -1,76 +0,0 @@
---
sidebar: auto
---
# Installing extensions
A CKAN extension is a Python package that modifies or extends CKAN. Each extension contains one or more plugins that must be added to your CKAN config file to activate the extension's features.
In this section we will only teach you how to install existing extensions. See [next steps](/docs/dms/ckan/create-extension) if you need to create or modify extensions.
## Add new extension
Let's install [Hello World](https://github.com/rclark/ckanext-helloworld) on the portal. For that we need to do 2 things:
1. Install extension when building docker image
2. Add new extension to CKAN plugins
### Install extension on docker build
For this we need to modify the Dockerfile for the ckan service. Let's edit it:
```
vi ckan/Dockerfile.dev
# Add following
RUN pip install -e git+https://github.com/rclark/ckanext-helloworld.git#egg=ckanext-helloworld
```
*Note:* In this example we use the vi editor, but you can use any editor of your choice.
### Add new extension to plugins
We need to modify the `.env` file for that. Search for `CKAN__PLUGINS` and add the new extension to the existing list:
```
vi .env
CKAN__PLUGINS=helloworld envvars image_view text_view recline_view datastore datapusher
```
## Check extension is installed
After modifying configuration files you will need to restart the portal. If your CKAN portal is up and running, bring it down and restart it:
```
docker-compose -f docker-compose.dev.yml stop
docker-compose -f docker-compose.dev.yml up --build
```
### Check what extensions you already have:
http://ckan:5000/api/3/action/status_show
The response should include a list of all extensions, with `helloworld` among them.
```
"extensions": [
"envvars",
"helloworld",
"image_view",
"text_view",
"recline_view",
"datastore",
"datapusher"
]
```
### Check the extension is actually working
This extension simply adds a new route `/hello/world/name` to base CKAN and says hello:
http://ckan:5000/hello/world/John-Doe
## Next steps
[Create your own extension](/docs/dms/ckan/create-extension)


@ -1,285 +0,0 @@
---
sidebar: auto
---
# How to play around with CKAN
In this section, we are going to show some basic functionality of CKAN focused on the API.
## Prerequisites
- We assume you've already completed the [Getting Started Guide](/docs/dms/ckan/getting-started).
- You have a basic understanding of Key data portal concepts:
CKAN is a tool for making data portals to manage and publish datasets. You can read about the key concepts such as Datasets and Organizations in the User Guide -- or you can just dive in and play around!
https://docs.ckan.org/en/2.9/user-guide.html
>[!tip]
Install a [JSON formatter plugin for Chrome](https://chrome.google.com/webstore/detail/json-formatter/bcjindcccaagfpapjjmafapmmgkkhgoa?hl=en) or browser of your choice.
If you are familiar with the command line tool `curl`, you can use that.
In this tutorial, we will be using `curl`, but for most of the commands, you can paste a link in your browser. For POST commands, you can use [Postman](https://www.getpostman.com/) or [Google Chrome Plugin](https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop).
## First steps
>[!tip]
By default the portal is accessible on http://localhost:5000. Let's update your `/etc/hosts` to access it on http://ckan:5000:
```
vi /etc/hosts # You can use the editor of your choice
# add following
127.0.0.1 ckan
```
At this point, you should be able to access the portal on http://ckan:5000.
![CKAN Home Page](https://i.imgur.com/T5LWo8A.png)
Let's add some fixtures to it. In software, a fixture is something used consistently for testing (in this case, data for you to play around with). Run the following from your terminal (do NOT stop the previous docker process, as this one depends on the already running containers; run the command in another terminal):
```sh
docker-compose -f docker-compose.dev.yml exec ckan-dev ckan seed basic
```
Optionally you can `exec` into a running container using
```sh
docker exec -it [name of container] sh
```
and run the `ckan` command there
```sh
ckan seed basic
```
You should be able to see 2 new datasets on the home page:
![CKAN with data](https://i.imgur.com/BiSifyb.png)
To get more details on ckan commands please visit [CKAN Commands Reference](https://docs.ckan.org/en/2.9/maintaining/cli.html#ckan-commands-reference).
### Check CKAN API
This tutorial focuses on the CKAN API as that is central to development work and requires more guidance. We also invite you to explore the user interface which you can do directly yourself by visiting http://ckan:5000/.
#### Let's check the portal status
Go to http://ckan:5000/api/3/action/status_show.
You should see something like this:
```json
{
"help": "https://ckan:5000/api/3/action/help_show?name=status_show",
"success": true,
"result": {
"ckan_version": "2.9.x",
"site_url": "https://ckan:5000",
"site_description": "Testing",
"site_title": "CKAN Demo",
"error_emails_to": null,
"locale_default": "en",
"extensions": [
"envvars",
...
"demo"
]
}
}
```
This means everything is OK: the CKAN portal is up and running, the API is working as expected. In case you see an internal server error, please check the logs in your terminal.
### A Few useful API endpoints to start with
CKAN's Action API is a powerful, RPC-style API that exposes all of CKAN's core features to API clients. All of a CKAN website's core functionality (everything you can do with the web interface and more) can be used by external code that calls the CKAN API.
#### Get a list of all datasets on the portal
http://ckan:5000/api/3/action/package_list
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=package_list",
"success": true,
"result": ["annakarenina", "warandpeace"]
}
```
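The same call can also be made from code; a minimal sketch using Python's `requests` library, assuming the local instance from this guide at `http://ckan:5000`:
```python
# A minimal sketch: call package_list from Python instead of the browser.
import requests

response = requests.get("http://ckan:5000/api/3/action/package_list")
response.raise_for_status()
payload = response.json()
assert payload["success"]
print(payload["result"])  # e.g. ["annakarenina", "warandpeace"]
```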
#### Search for a dataset
http://ckan:5000/api/3/action/package_search?q=russian
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=package_search",
"success": true,
"result": {
"count": 2,
...
}
}
```
#### Get dataset details
http://ckan:5000/api/3/action/package_show?id=annakarenina
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=package_show",
"success": true,
"result": {
"license_title": "Other (Open)",
...
}
}
```
#### Search for a resource
http://ckan:5000/api/3/action/resource_search?query=format:plain%20text
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=resource_search",
"success": true,
"result": {
"count": 1,
"results": [
{
"mimetype": null,
...
}
]
}
}
```
#### Get resource details
http://ckan:5000/api/3/action/resource_show?id=288455e8-c09c-4360-b73a-8b55378c474a
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=resource_show",
"success": true,
"result": {
"mimetype": null,
...
}
}
```
*Note:* These are only a few examples. You can find a full list of API actions in the [CKAN API guide](https://docs.ckan.org/en/2.9/api/#action-api-reference).
### Create Organizations, Datasets and Resources
There are 4 steps:
- Get an API key;
- Create an organization;
- Create dataset inside an organization (you can't create a dataset without a parent organization);
- And add resources to the dataset.
#### Get a Sysadmin Key
To create your first dataset, you need an API key.
You can see sysadmin credentials in the file `.env`. By default, they should be
- Username: `ckan_admin`
- Password: `test1234`
1. Navigate to http://ckan:5000/user/login and login.
2. Click on your username (`ckan_admin`) in the upper-right corner.
3. Scroll down until you see `API Key` on the left side of the screen and copy its value. It should look similar to `c7325sd4-7sj3-543a-90df-kfifsdk335`.
#### Create Organization
You can create an organization from the browser easily, but let's use [CKAN API](https://docs.ckan.org/en/2.9/api/#ckan.logic.action.create.organization_create) to do so.
```sh
curl -X POST http://ckan:5000/api/3/action/organization_create -H "Authorization: 9c04a69d-79f4-4b4b-b4e1-f2ac31ed961c" -d '{
"name": "demo-organization",
"title": "Demo Organization",
"description": "This is my awesome organization"
}'
```
Response:
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=organization_create",
"success": true,
"result": {"users": [
{
"email_hash":
...
}
]}
}
```
#### Create Dataset
Now, we are ready to create our first dataset.
```sh
curl -X POST http://ckan:5000/api/3/action/package_create -H "Authorization: 9c04a69d-79f4-4b4b-b4e1-f2ac31ed961c" -d '{
"name": "my-first-dataset",
"title": "My First Dataset",
"description": "This is my first dataset!",
"owner_org": "demo-organization"
}'
```
Response:
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=package_create",
"success": true,
"result": {
"license_title": null,
...
}
}
```
This will create an empty (draft) dataset.
#### Add a resource to it
```sh
curl -X POST http://ckan:5000/api/3/action/resource_create -H "Authorization: 9c04a69d-79f4-4b4b-b4e1-f2ac31ed961c" -d '{
"package_id": "my-first-dataset",
"url": "https://raw.githubusercontent.com/frictionlessdata/test-data/master/files/csv/100kb.csv",
"description": "This is the best resource ever!" ,
"name": "brand-new-resource"
}'
```
Response:
```json
{
"help": "http://ckan:5000/api/3/action/help_show?name=resource_create",
"success": true,
"result": {
"cache_last_updated": null,
...
}
}
```
That's it! Now you should be able to see your dataset on the portal at http://ckan:5000/dataset/my-first-dataset.
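To double-check via the API, you can fetch the dataset back with the `package_show` action introduced earlier; the result should now include the resource you just created:
```sh
curl -s "http://ckan:5000/api/3/action/package_show?id=my-first-dataset"
```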
## Next steps
* [Install Extensions](/docs/dms/ckan/install-extension).


@ -1,81 +0,0 @@
---
sidebar: auto
---
# Content Management System (CMS) for Data Portals
## Summary
When selecting a CMS solution for Data Portals, we always recommend using a headless CMS solution as it provides full flexibility when building your system. Headless CMS means that only content (no HTML, CSS, JS) is created in the CMS backend and delivered to the frontend via an API.
> The traditional CMS approach to managing content put everything in one big bucket — content, images, HTML, CSS. This made it impossible to reuse the content because it was commingled with code. Read more - https://www.contentful.com/r/knowledgebase/what-is-headless-cms/.
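For instance, a frontend might fetch blog posts from the CMS's content API as plain JSON (an illustrative sketch only; the endpoint and response shape depend entirely on the CMS you choose):
```sh
# Hypothetical headless CMS content API endpoint
curl -s "https://cms.example.com/api/posts"
# => {"data": [{"id": 1, "title": "Hello world", "body": "..."}]}
```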
## Features
Core features:
* Create and manage blog posts (or news), e.g., `/news/abcd`
* Create and manage static pages, e.g., `/about`, `/privacy` etc.
Important features:
* User management, e.g., ability to manage editors so that multiple users can edit content.
* User roles, e.g., ability to assign different roles for users so that we can have admins, editors, reviewers.
* Draft content, e.g., useful when working on content that goes through a review/feedback loop. However, this is not essential if you have multiple environments.
* A syntax for writing content with text formatting, multi-level headings, links, images, videos, bullet points. For example, markdown.
* User-friendly interface (text editor) to write content.
```mermaid
graph LR
CMS -.-> Blog["Blog or news section"]
CMS -.-> IndBlog["Individual blog post"]
CMS -.-> About["About page content"]
CMS -.-> TC["Terms and conditions page content"]
CMS -.-> Privacy["Privacy policy"]
CMS -.-> Other["Other static pages"]
```
## Options
Headless CMS options:
* WordPress (headless option)
* Drupal (headless option)
* TinaCMS - https://tina.io/
* Git-based CMS - custom solution based on a Git repository.
* Strapi - https://docs.strapi.io/developer-docs/latest/getting-started/introduction.html
* Ghost - https://ghost.org/docs/
* CKAN Pages (built-in CMS option) - https://github.com/ckan/ckanext-pages
*Note: there are lots of CMSs available, both open source and proprietary. We are only considering a few of them in this article, and our requirement is that content can be fetched via an API (headless CMS). Readers are welcome to add more options to the list.*
Comparison criteria:
* Self-hosting (note this isn't a criterion for most projects, and managed hosting is sometimes the better option)
* Free and open source
* Multi-language posts (unnecessary if your portal is single-language)
Comparison:
| Options | Hosting | Free | Multi language |
| -------- | -------- | -------- | -------------- |
| Drupal | Tedious | Yes | Not straightforward|
| WordPress| Tedious | Yes | Terrible UX |
| TinaCMS | Medium | Yes | Limited |
| Git-based| Easy | Yes | Custom |
| Strapi | Medium | Yes | Simple |
| Ghost | Medium | Yes | Simple |
| CKAN Pages| Easy | Yes | ? |
## Conclusion and recommendation
The final decision should be based on the following:
* How often will editors create content? E.g., daily, weekly, monthly, occasionally.
* How much content do you already have that needs to be migrated?
* How many content editors are you planning to have? What is their technical expertise?
* Are there any specific requirements, e.g., must you host in your own cloud?
By default, we would recommend considering options such as Strapi, TinaCMS and a Git-based CMS. We can even start with CKAN's simple built-in Pages and only move to a more sophisticated CMS once it is required.


@ -1,163 +0,0 @@
# Dashboards
## What you can do
* Describe visualizations in JSON and create interactive widgets
* Customize dashboard layout using well-known HTML
* Style dashboard design with TailwindCSS utility classes
* Rapidly create basic charts using "simple" graphing specification
* Create advanced widgets by utilizing "vega" visualization grammar
## How?
To create a dashboard you need to have some basic knowledge of:
* git
* JSON
* HTML
Before proceeding further, make sure you have forked the dashboards repository - https://github.com/datopian/dashboards.
### Create a directory for your dashboard
In the root of the project, create a directory for your dashboard. The name of this directory is the name of your dashboard, so make it short and meaningful. Here are some good examples:
* population
* environment
* housing
Your dashboard will then be available at https://domain.com/dashboards/your-dashboard-name.
Note that your dashboard directory will contain 2 files:
* `index.html` - [HTML template](#Set-up-your-layout)
* `config.json` - [configurations for widgets](#Configure-visualizations)
### Set up your layout
You need to prepare an HTML template for your dashboard. There is no need to create an entire HTML page, only the snippet needed to inject the widgets:
```html
<h1>My example dashboard</h1>
<div id="widget1"></div>
<div id="widget2"></div>
```
In the example above, we've created 2 div elements that we can reference by id when configuring visualizations.
Note that you can add any HTML tags and make your layout stand out. In the next section we'll explain how to do some styling.
### Style it
This step is optional, but if you have a dashboard with lots of widgets and metadata, you might want to style it so it appears nicely:
* Use TailwindCSS utility classes **(recommended)**
* Official docs - https://tailwindcss.com/
* Cheat sheet - https://nerdcave.com/tailwind-cheat-sheet
* Add inline CSS
Example of using TailwindCSS utility classes:
```html
<h1 class="text-gray-700 text-lg">My example dashboard</h1>
<div class="inline-block bg-gray-200 m-10" id="widget1"></div>
<div class="inline-block bg-gray-200 m-10" id="widget2"></div>
```
### Configure visualizations
In your config file `config.json` you can describe your dashboard in the following way:
```json
{
"widgets": [],
"datasets": []
}
```
* `widgets` - a list of objects where each object contains information about where a widget should be injected and how it should look (see below for examples).
* `datasets` - a list of dataset URLs.
Example of a minimal widget object:
```json
{
"elementId": "widget1",
"view": {
"resources": [
{
"datasetId": "",
"name": ""
}
],
"specType": "",
"spec": {}
}
}
```
where:
* `elementId` - the "id" of the HTML tag you want to use as the container for your widget. See [how we defined it here](#Set-up-your-layout).
* `view` - descriptor of a visualization (widget).
* `resources` - a list of resources needed for a widget and required manipulations (transformations).
* `datasetId` - the id (name) of the dataset from which the resource is extracted.
* `name` - name of the resource.
* `transform` - transformations required for a resource (optional). If you want to learn more about transforms:
* Filtering data and applying formula: https://datahub.io/examples/transform-examples-on-co2-fossil-global#readme
* Sampling: https://datahub.io/examples/example-sample-transform-on-currency-codes#readme
* Aggregating data: https://datahub.io/examples/transform-example-gdp-uk#readme
* `specType` - type of a widget, e.g., `simple`, `vega` or `figure`.
* `spec` - specification for selected widget type. See below for examples.
* `title`, `legend`, `footer` - these are optional metadata for a widget. All must be strings.
#### Basic charts
The simple graph spec is the easiest and quickest way to specify a visualization. Using the simple graph spec you can generate line and bar charts:
https://frictionlessdata.io/specs/views/#simple-graph-spec
#### Advanced vizualizations
Please check these instructions to create advanced graphs via the Vega specification:
https://frictionlessdata.io/specs/views/#vega-spec
#### Figure widget
The figure widget is used to display a single value from a dataset. For example, you might want to show the latest unemployment rate in your dashboard so that it indicates the current status of your city's economy. See the left-hand side widgets here - https://london.datahub.io/.
A specification for the figure widget would have the following structure:
```
{
"fieldName": "",
"suffix": "",
"prefix": ""
}
```
where the "fieldName" attribute is used to extract a specific value from a row. The "suffix" and "prefix" attributes are optional strings used to surround the figure, e.g., you can add a percent sign to indicate that the value is a percentage.
Note that the first row of the data is used, which means you may need to transform the data so that the relevant value ends up in the first row. See this example for details - https://github.com/datopian/dashboard-js/blob/master/example/script.js#L12-L22.
#### Example
Check out the carbon emissions per capita dashboard as an example of creating advanced visualizations:
https://github.com/datopian/dashboards/tree/master/co2-emission-by-nation
## Share it with the world!
To make your dashboard live on the data portal, you need to:
1. Create a pull request (see the example commands below).
2. Implement any changes requested during review.
3. Wait until your work gets merged into the "master" branch.
4. Done! Your dashboard is now available at https://domain.com/dashboards/your-dashboard-name
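A typical submission flow from the command line might look like this (a sketch; branch and directory names are illustrative):
```sh
git checkout -b add-population-dashboard
git add population/
git commit -m "Add population dashboard"
git push origin add-population-dashboard
# ...then open a pull request against the dashboards repository on GitHub
```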
## Research
* http://dashing.io/ - no longer maintained as of 2016
* Replaced by https://smashing.github.io/


@ -1,358 +0,0 @@
# HDX Technical Architecture for Quick Dashboards
Notes from analysis and discussion in 2018.
# Concepts
* Bite (View): a description of an individual chart / map / fact and its data (source)
* bite (for Simon): title, desc, data (compiled), uniqueid, map join info
* view (Data Package views): title, desc, data sources (on parent data package), transforms, ...
* compiled view: title, desc, data (compiled)
* Data source:
* Single HXL file (Currently, Simon's approach requires that all the data is in a single table so there is always a single data source.)
* Data Package(s)
* Creator / Editor: creating and editing the dashboard (given the source datasets)
* Renderer: given dashboard config render the dashboard
# Dashboard Creator
```mermaid
graph LR
datahxl[data+hxl]
layouts[Layout options]
dashboard["Dashboard (config)<br/><br/>(Layout, Data Sources, Selected Bites)"]
editor[Editor]
bites[Bites<br /><em>potential charts, maps etc</em>]
datahxl --suggester--> bites
bites --> editor
layouts --> editor
editor --save--> dashboard
```
## Bite generation
```mermaid
graph LR
data[data with hxl] --> inferbites(("Iterate Recipes<br/>and see what<br/>matches"))
inferbites --> possmatches[List of potential bites]
possmatches --no map info--> done[Bite finished]
possmatches --lat+lon--> done
possmatches --geo info--> maplink(("Check pcodes<br/> and link<br/>map server url"))
maplink -.-> fuzzy((Fuzzy Matcher))
fuzzy --> done
maplink --> done
maplink --error--> nobite[No Bite]
```
## Extending to non-HXL data
It is easy to extend this to non-HXL data by using base HXL types and inference e.g.
```
date => #date
geo => #geo+lon
geo => #geo+lat
string/category => #indicator
```
```mermaid
graph LR
data[data + syntax]
datahxl[data+hxl]
layouts[layout options]
dashboard["Dashboard (config)"]
editor[Editor]
bites[Bites<br /><em>potential charts, maps etc</em>]
data --infer--> datahxl
datahxl --suggester--> bites
bites --> editor
layouts --> editor
editor --save--> dashboard
```
# Dashboard Renderer
Rendering the dashboard involves:
```mermaid
graph LR
bites[Compiled Bites/Views]
renderer["Renderer<br/>(Layout + charting / mapping libs)"]
data[Data]
subgraph Dashboard Config
bitesconf[Bites/Views Config]
layoutconf[Layout Config]
end
bitecompiler[Bite/View Compiler]
bitecompiler --> bites
bitesconf --> bitecompiler
data --> bitecompiler
layoutconf --> renderer
bites --> renderer
renderer --> dashboard[HTML Dashboard]
```
## Compiled View generation
See https://docs.datahub.io/developers/views/
----
# Architecture Proposal
* data loader library
* File: rows, fields (rows, columns)
* type inference (?)
* syntax: table schema infer
* semantics (not now)
* data transform library (include hxl support)
* suggester library
* renderer library
Interfaces / Objects
* File
* (Dataset)
* Transform
* Algorithm / Recipe
* Bite / View
* Ordered Set of Bites
* Dashboard
## File (and Dataset)
http://okfnlabs.org/blog/2018/02/15/design-pattern-for-a-core-data-library.html
https://github.com/datahq/data.js
File
rows
descriptor
schema
schema
## Recipe
```json=
{
"id": "chart0001",
"type": "chart",
"subType": "row",
"ingredients": [{"name": "what", "tags": ["#activity-code-id", "#sector"]}],
"criteria": ["what > 4", "what < 11"],
"variables": ["what", "count()"],
"chart": "",
"title": "Count of {1}",
"priority": 8
}
```
## Bite / Compiled View
```json=
{
bite: array [...data for chart...],
id: string "...chart bite ID...",
priority: number,
subtype: string "...bite subtype - row, pie...",
title: string "...title of bite...",
type: string "...bite type...",
uniqueID: string "...unique ID combining bite and data structure",
}
```
=>
## Dashboard
```json=
{
"title":"",
"subtext":"",
"filtersOn":true,
"filters":[],
"headlinefigures":0,
"headlinefigurecharts":[
],
"grid":"grid5",
"charts":[
{
"data":"https://proxy.hxlstandard.org/data.json?filter01=append&append-dataset01-01=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1FLLwP6nxERjo1xLygV7dn7DVQwQf0_5tIdzrX31HjBA%2Fedit%23gid%3D0&filter02=select&select-query02-01=%23status%3DFunctional&url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1R9zfMTk7SQB8VoEp4XK0xAWtlsQcHgEvYiswZsj9YA4%2Fedit%23gid%3D0",
"chartID":""
},
{
"data":"https://proxy.hxlstandard.org/data.json?filter01=append&append-dataset01-01=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1FLLwP6nxERjo1xLygV7dn7DVQwQf0_5tIdzrX31HjBA%2Fedit%23gid%3D0&filter02=select&select-query02-01=%23status%3DFunctional&url=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1R9zfMTk7SQB8VoEp4XK0xAWtlsQcHgEvYiswZsj9YA4%2Fedit%23gid%3D0",
"chartID":""
}
]
}
```
```
var config = {
layout: 2x2 // in city-indicators dashboard is handcrafted in layout
widgets: [
{
elementId / data-id: ...
view: {
metadata: { title, sources: "World Bank"}
resources: rule for creating compiled list of resources. [ { datasetId: ..., resourceId: ..., transform: ...} ]
specType:
viewspec:
}
},
{
},
]
datasets: [
list of data package urls ...
]
}
```
Simon's example
https://simonbjohnson.github.io/hdx-iom-dtm/
```javascript=
{
// metadata for dashboard
"title":"IOM DTM Example",
"subtext":" ....",
"headlinefigures": 3,
"grid": "grid5", // user chosen layout for dashboard. Choice of 10 grids
"headlinefigurecharts": [ //widgets - headline widget
{
"data": "https://beta.proxy.hxlstandard.org/data/1d0a79/download/africa-dtm-baseline-assessments-topline.csv",
"chartID": "text0013/#country+name/1" // bite Id
// elementId: ... // implicit from order in grid ...
},
{
"data": "https://beta.proxy.hxlstandard.org/data/1d0a79/download/africa-dtm-baseline-assessments-topline.csv",
"chartID": "text0012/#affected+hh+idps/5"
},
{
"data": "https://beta.proxy.hxlstandard.org/data/1d0a79/download/africa-dtm-baseline-assessments-topline.csv",
"chartID":"text0012/#affected+idps+ind/6"
}
],
"charts": [ // chart widgets
{
"data": "https://beta.proxy.hxlstandard.org/data/1d0a79/download/africa-dtm-baseline-assessments-topline.csv",
"chartID": "map0002/#adm1+code/4/#affected+idps+ind/6",
"scale":"log" // chart config ...
},
{
"data": "https://beta.proxy.hxlstandard.org/data/1d0a79/download/africa-dtm-baseline-assessments-topline.csv",
"chartID": "chart0009/#country+name/1/#affected+idps+ind/6",
"sort":"descending"
}
]
}
```
Algorithm
1. Extract the data references to a common list of datasets and fetch them
2. Generate compiled data via hxl.js plus your own code, transforming it into the final data for charting etc
```
function transformChart(rawSourceData (csv parsed), bite) => [ [ ...], [...]] - data for chart
hxl.js
custom code
function transformMap
function transformText ...
```
https://github.com/SimonbJohnson/hxlbites.js
https://github.com/SimonbJohnson/hxlbites.js/blob/master/hxlBites.js#L957
```
hb.reverse(bite) => compiled bite (see above) (data, chartConfig)
```
3. Generate the dashboard HTML and compute element ids in the actual page (element ids are computed from the grid setup)
4. Now have a final dashboard config
```
widgets: [
{
data: [ [...], [...]]
widgetType: text, chart, map ...
elementId: // element to bind to ...
}
]
```
5. Now use specific renderer libraries, e.g. Leaflet, Plotly/Chartist, etc., to render out into the page
https://github.com/SimonbJohnson/hxldash/blob/master/js/site.js#L294
### Notes
"Source" version of dashboard with data uncompiled.
Compiled version of dashboard with final data inline ...
hxl.js takes an array of arrays ... and outputs array of arrays ...
```
{
schema: [...]
data: [...]
}
```
# Renderer
* Renderer for the dashboard
* Renderer for each widget
```
function createChart(bite, elementId) => svg in bite
```
## Charts
* Data Package View => svg/png etc
* plotly
* vega (d3)
* https://github.com/frictionlessdata/datapackage-render-js
* chartist
* react-charts
## Map
* Leaflet
* react-leaflet
## Tables
...


@ -1,270 +0,0 @@
# Data APIs (and the DataStore)
## Introduction
A Data API provides *API* access to data stored in a [DMS][]. APIs provide granular, per record access to datasets and their component data files. They offer rich querying functionality to select the records you want, and, potentially, other functionality such as aggregation. Data APIs can also provide write access, though this has traditionally been rarer.[^rarer]
Furthermore, much of the richer functionality of a DMS or Data Portal such as data visualization and exploration require API data access rather than bulk download.
[DMS]: /docs/dms/dms
[^rarer]: It is rarer because write access usually means a) the data for this dataset is a structured database rather than a data file (which is normally more expensive), and b) the Data Portal has now become the primary (or semi-primary) home of this dataset rather than simply being the host of a dataset whose home and maintenance is elsewhere.
### API vs Bulk Access
Direct download of a whole data file is the default method of access for data in a DMS. API access complements this direct, "bulk" download approach. In some situations API access may be the primary access option (so-called "API first"). In other cases, structured storage and API read/write may be the *only* way the data is stored and there is no bulk storage -- for example, this would be a natural approach for time series data which is being rapidly updated, e.g. every minute.
*Fig 1: Contrasting Download and API based access*
```bash
# simple direct file access. You download
https://my-data-portal.org/my-dataset/my-csv-file.csv
# API based access. Find the first 5 records with 'awesome'
https://my-data-portal.org/data-api/my-dataset/my-csv-file-identifier?q=awesome&limit=5
```
In addition to differing in the volume of access, APIs often differ from bulk download in their data format: following web conventions, data APIs usually return the data in a standard format such as JSON (and can also provide various other formats, e.g. XML). By contrast, direct data access necessarily supplies the data in whatever format it was created in.
### Limitations of APIs
Whilst Data APIs are in many ways more flexible than direct download they have disadvantages:
* APIs are much more costly and complex to create and maintain than direct download
* API queries are slow and limited in size because they run in real time in memory. Thus, for bulk access, e.g. of the entire dataset, direct download is much faster and more efficient (downloading a 1GB CSV directly is easy and takes seconds, but attempting to do so via the API may be very slow or even crash the server).
{/*
TODO: do more to compare and contrast download vs API access (e.g. what each is good for, formats, etc)
*/}
### Why Data APIs?
Data APIs underpin the following valuable functionality on the "read" side:
* **Data (pre)viewing**: reliably and richly (e.g. with querying, mapping etc). This makes the data much more accessible to non-technical users.
* **Visualization and analytics**: rich visualization and analytics may need a data API (because they need to easily query and aggregate parts of the dataset).
* **Rich Data Exploration**: when exploring the data you will want to move through a dataset quickly, only pulling parts of the data and drilling down further as needed.
* **(Thin) Client applications**: with a data API third party users of the portal can build apps on top of the portal data easily and quickly (and without having to host the data themselves)
Corresponding job stories would be like:
* When building a visualization I want to select only some part of a dataset that I need for my visualization so that I can load the data quickly and efficiently.
* When building a Data Explorer or Data Driven app I want to slice/dice/aggregate my data (without downloading it myself) so that I can display that in my explorer / app.
On the write side they provide support for:
* **Rapidly updating data e.g. timeseries**: if you are updating a dataset every minute or every second you want an append operation and don't want to store the whole file every update just to add a single record
* **Datasets stored as structured data by default** and which can therefore be updated in part, a few records at a time, rather than all at once (as with blob storage)
## Domain Model
The functionality associated with the Data APIs can be divided into the following areas:
* **Descriptor**: metadata describing and specifying the API e.g. general metadata e.g. name, title, description, schema, and permissions
* **Manager** for creating and editing APIs.
* API: for creating and editing Data API's descriptors (which triggers creation of storage and service endpoint)
* UI: for doing this manually
* **Service** (read): web API for accessing structured data (i.e. per record) with querying etc. *When we simply say "Data API" this is usually what we are talking about*
* Custom API & Complex functions: e.g. aggregations, join
* Tracking & Analytics: rate-limiting etc
* Write API: usually secondary because of its limited performance vs bulk loading
* Bulk export of query results, especially large ones (or even export of the whole dataset in the case where the data is stored directly in the DataStore rather than the FileStore). This is an increasingly important feature; it can be a lower priority, but if required it is a substantive feature to implement.
* **Data Loader**: bulk loading data into the system that powers the data API. **This is covered in a [separate Data Load page](/docs/dms/load/).**
* Bulk Load: bulk import of individual data files
* Maybe includes some ETL => this takes us more into data factory
* **Storage (Structured)**: the underlying structured store for the data (and its layout). For example, Postgres and its table structure. This could be considered a separate component that the Data API uses or as part of the Data API -- in some cases the store and API are completely wrapped together, e.g. ElasticSearch is both a store and a rich Web API.
>[!tip]Visualization is not part of the API but the demands of visualization are important in designing the system.
## Job Stories
### Read API
When I'm building a client application or extracting data I want to get data quickly and reliably via an API so that I can focus on building the app rather than managing the data
* Performance: Querying data is **quick**
* Filtering: I want to filter data easily so that I can get the slice of data that I need.
* ❗ unlimited query size for downloading, e.g., can download filtered data with millions of rows
* can get results in 3 formats: CSV, JSON and Excel.
* API formats
* "Restful" API (?)
* SQL API (?)
* ❗ GraphQL API (?)
* ❗ custom views/cubes (including pivoting)
* Query UI
:exclamation: = something not present atm
#### Retrieve records via an API with filtering (per resource) (if tabular?)
When I am building a web app, a rich viz, a data display, etc. I want to have an API to the data (returning e.g. JSON, CSV) [in a resource] so that I can get precise chunks of data to use without having to download and store the whole dataset myself
* I want examples
* I want a playground interface …
#### Bulk Export
When I have a query with a large amount of results I want to be able to download all of those results so that I can analyse them with my own tools
#### Multiple Formats
When querying data via the API I want to be able to get the results in different formats (e.g. JSON, CSV, XML (?), ...) so that I can get it in a format most suitable for my client application or tool
#### Aggregate data (perform ops) via an API …
When querying data to use in a client application I want to be able to perform aggregations such as sum, group by etc so that I can get back summary data directly and efficiently (and don't have to compute myself or wait for large amounts of data)
#### SQL API
When querying the API as a Power User I want to use SQL so that I can do complex queries and operations and reuse my existing SQL knowledge
#### GeoData API
When querying a dataset with geo attributes such as location I want to be able use geo-oriented functionality e.g. find all items near X so that I can find the records I want by location
#### Free Text Query (Google Style / ElasticSearch Style)
When querying I want to do a google style search in data e.g. query for "brown" and find all rows with brown in them or do `brown road_name:*jfk*` and get all results with brown in them and whose field `road_name` has `jfk` in it so that I can provide a flexible query interface to my users
#### Custom Data API
As a Data Curator I want to create a custom API for one or more resources so that users can access my data in convenient ways …
* E.g. query by dataset or resource name rather than id ...
#### Search through all data (that is searchable) / Get Summary Info
As a Consumer I want to search across all the data in the Data Portal at once so that I can find the value I want quickly and easily … (??)
#### Search for variables used in datasets
As a Consumer (researcher/student …) I want to look for datasets with particular variables in them so that I can quickly locate the data I want for my work
* Search across the column names so that ??
#### Track Usage of my Data API
As a DataSet Owner I want to know how much my Data API is being used so that I can report that to stakeholders / be proud of that
#### Limit Usage of my Data API (and/or charge for it)
As a Sysadmin I want to limit usage of my Data API per user (and maybe charge above a certain level) so that I don't spend too much money
#### Restrict Access to my Data API
As a Publisher I want to only allow specific people to access data via the data API so that …
* Want this to mirror the same restrictions I have on the dataset / resources elsewhere (?)
### UI for Exploring Data
>[!warning]This probably is not a Data API epic -- rather it would come under the Data Explorer.
* I want an interface to “sql style” query data
* I want a filter interface into data
* I want to download filtered data
* ...
### Write API
When adding data I want to write new rows via the data API so that the new data is available via the API
* ? do we also want a way to do bulk additions?
### DataStore
When creating a Data API I want a structured data store (e.g. relational database) so that I can power the Data API and have it be fast, efficient and reliable.
## CKAN v2
In CKAN 2 the bulk of this functionality is in the core extension `ckanext-datastore`:
* https://docs.ckan.org/en/2.8/maintaining/datastore.html
* https://github.com/ckan/ckan/tree/master/ckanext/datastore
In summary: the underlying storage is provided by a Postgres database. A dataset resource is mapped to a table in Postgres. There are no relations between tables (no foreign keys). A read and write API is provided by a thin Python wrapper around Postgres. Bulk data loading is provided in separate extensions.
### Implementing the 4 Components
Here's how CKAN 2 implements the four components described above:
* Read API: is provided by an API wrapper around Postgres. This is a CKAN extension written in Python and runs in-process in the CKAN instance.
* Offers both classic Web API query and SQL queries.
* Full text, cross field search is provided via Postgres and creating an index concatenating across fields.
* Also includes a write API and functions to create tables
* DataStore: a dedicated Postgres database (separate to the main CKAN database) with one table per resource.
* Data Load: provided by either DataPusher (default) or XLoader. More details below.
* Utilize the CKAN jobs system to load data out of band
* Some reporting integrated into UI
* Supports tabular data (CSV or Excel): this converts CSV or Excel into data that can be loaded into the Postgres DB.
* Bulk Export: you can bulk download via the extension using the dump functionality - https://docs.ckan.org/en/2.8/maintaining/datastore.html#downloading-resources (see the sketch after this list)
* Note, however, that this will have problems with large resources, either timing out or hanging the server
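A minimal sketch of such a dump call, assuming the standard `/datastore/dump/` route provided by ckanext-datastore (the resource id is a placeholder):
```sh
# Download an entire DataStore resource as CSV
curl -o resource.csv "http://127.0.0.1:5000/datastore/dump/{RESOURCE-ID}"
```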
### Read API
The CKAN DataStore extension provides an ad-hoc database for storage of structured data from CKAN resources.
See the DataStore extension: https://github.com/ckan/ckan/tree/master/ckanext/datastore
[Datastore API](https://docs.ckan.org/en/2.8/maintaining/datastore.html#the-datastore-api)
[Making Datastore API requests](https://docs.ckan.org/en/2.8/maintaining/datastore.html#making-a-datastore-api-request)
[Example: Create a DataStore table](https://docs.ckan.org/en/2.8/maintaining/datastore.html#ckanext.datastore.logic.action.datastore_create)
```sh
curl -X POST http://127.0.0.1:5000/api/3/action/datastore_create \
-H "Authorization: {YOUR-API-KEY}" \
-d '{ "resource": {"package_id": "{PACKAGE-ID}"}, "fields": [ {"id": "a"}, {"id": "b"} ] }'
```
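For reads, a complementary sketch using the `datastore_search` action (the resource id is a placeholder; `q` and `limit` are standard parameters of this action):
```sh
curl "http://127.0.0.1:5000/api/3/action/datastore_search?resource_id={RESOURCE-ID}&q=jones&limit=5"
```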
### Data Load
See [Load page](/docs/dms/load#ckan-v2).
### DataStore
Implemented as a separate Postgres Database.
https://docs.ckan.org/en/2.8/maintaining/datastore.html#setting-up-the-datastore
### What Issues are there?
Sharp Edges
* The connection between the MetaStore (main CKAN objects DB) and the DataStore is not always well maintained, e.g., if I call the “purge_dataset” action, it will remove stuff from the MetaStore but it won't delete the table from the DataStore. This does not break the UX, but your DataStore DB grows in size and you might have junk tables with lots of data.
DataStore (Data API)
* One table per resource and no way to join across resources
* Indexes are auto-created and no way to customize per resource. This can lead to issues on loading large datasets.
* No API gateway (i.e. no way to control DDOSing, to do rate limiting etc)
* SQL queries do not work with private datasets
## CKAN v3
Following the general [next gen microservices approach][ng], the Data API is separated into distinct microservices.
[ng]: /docs/dms/ckan-v3/next-gen
### Read API
Approach: Refactor the current DataStore API into a standalone microservice. The key point would be to break out permissioning, either via a call out to a separate permissioning service or a simple JWT approach where the capability is baked in.
Status: In Progress (RFC) - see https://github.com/datopian/data-api
### Data Load
Implemented via AirCan. See [Load page](/docs/dms/load).
### Storage
Backs onto Postgres by default, just like CKAN 2. May also explore using other backends, especially from cloud providers, e.g. BigQuery or AWS Redshift.
* See Data API service https://github.com/datopian/data-api
* BigQuery: https://github.com/datopian/ckanext-datastore-bigquery


@ -1,282 +0,0 @@
---
sidebar: auto
---
# Data Explorer
The Datopian Data Explorer is a React single-page application and framework for creating and displaying rich data explorers (think Tableau-lite). Use it stand-alone or with CKAN. For CKAN it is a drop-in replacement for ReclineJS in CKAN Classic.
![Data Explorer](/static/img/docs/dms/data-explorer/data-explorer.png)
> [Data Explorer for the City of Montreal](http://montreal.ckan.io/ville-de-montreal/geobase-double#resource-G%C3%83%C2%A9obase%20double)
## Features / Highlights
"Data Explorer" is an embeddable React/Redux application that allows users to:
* Explore tabular, map, PDF, and other types of data
* Create map views of tabular data using the [Map Builder](#map-builder)
* Create charts and graphs of tabular data using [Chart Builder](#chart-builder)
* Easily build SQL queries for the DataStore API using the graphical interface of the [Datastore Query Builder](#datastore-query-builder)
## Components
The Data Explorer application acts as a coordinating layer and state management solution -- via [Redux](https://redux.js.org/) -- for several libraries, also maintained by Datopian.
### [Datapackage Views](https://github.com/datopian/datapackage-views-js)
![Datapackage Views](/static/img/docs/dms/data-explorer/datapackage-views.png)
Datapackage View is the rendering engine for the main window of the Data Explorer.
The above image displays the table shown in the `Table` tab, but note that Datapackage-views renders _all_ data visualizations: Tables, Charts, Maps, and others.
### [Datastore Query Builder](https://github.com/datopian/datastore-query-builder)
<img alt="Datastore Query Builder" src="/static/img/docs/dms/data-explorer/query-builder.png" width="250px" />
The Datastore Query Builder interfaces with the DataStore API to allow users to search data resources using an SQL-like interface. See the docs for this module here - [Datastore Query Builder docs](/docs/dms/data-explorer/datastore-query-builder/).
### [Map Builder](https://github.com/datopian/map-builder)
<img alt="Map Builder" src="/static/img/docs/dms/data-explorer/map-builder.png" width="250px" />
Map Builder allows users to build maps based on geo-data contained in tabular resources.
Supported geo formats:
* lon / lat (separate columns)
### [Chart Builder](https://github.com/datopian/chart-builder)
<img alt="Chart Builder" src="/static/img/docs/dms/data-explorer/chart-builder.png" width="250px" />
Chart Builder allows users to create charts and graphs from tabular data.
## Quick-start (Sandbox)
* Clone the data explorer
```bash
$ git clone git@gitlab.com:datopian/data-explorer.git
```
* Use yarn to install the project dependencies
```bash
$ cd data-explorer
$ yarn
```
* To see the Data Explorer running in a sandbox environment run [Cosmos](https://github.com/react-cosmos/react-cosmos)
```bash
$ yarn cosmos
```
## Configuration
The [`data-datapackage` attribute](#add-data-explorer-tags-to-the-page-markup) may influence how the element is displayed. It can be created from a [datapackage descriptor](https://frictionlessdata.io/specs/data-package/).
### Fixtures
Until we have better documentation on Data Explorer settings, use the [Cosmos fixtures](https://gitlab.com/datopian/data-explorer/blob/master/__fixtures__/with_widgets/geojson_simple.js) as an example of how to instantiate / configure the Data Explorer.
### Serialized state
`store->serializedState` is a representation of the application state _without fetched data_.
A data-explorer can be "hydrated" using the serialized state: it will refetch the data and render in the same state it was exported in.
### Share links
Share links can be added in the `datapackage.resources[0].api` attribute.
There is a common limit of around 2000 characters on URL strings. Our share links contain the entire application store tree, which is often larger than 2000 characters, in which case the application state cannot be shared via URL. Them's the breaks.
## Translations
### Add a Translation To Data Explorer
To add a translation for a new language to the Data Explorer you need to:
1. clone the repository you need to update
```bash
git clone git@gitlab.com:datopian/data-explorer.git
```
2. go to `src/i18n/locales/` folder
3. add a new sub-folder with locale name and the new language json file (e.g. `src/i18n/locales/ru/translation.json`)
4. add the new file to resources settings in `i18n.js`:
`src/i18n/i18n.js`:
```javascript
import en from './locales/en/translation.json'
import da from './locales/da/translation.json'
import ru from './locales/ru/translation.json'
...
ru: {
translation: {
...require('./locales/ru/translation.json'),
...
}
},
...
```
5. create a merge request with the changes
### Add a translation To a Component
Some strings may come from a component; adding translations for them requires some extra steps, e.g. for datapackage-views-js:
1. clone the repository
```bash
git clone https://github.com/datopian/datapackage-views-js.git
```
2. go to `src/i18n/locales/` folder
3. add a new sub-folder with locale name and the new language json file (e.g. `src/i18n/locales/ru/translation.json`)
4. add the new file to resources settings in `i18n.js`:
`src/i18n/i18n.js`:
```javascript
...
import ru from './locales/ru/translation.json'
...
resources: {
...
ru: {translation: ru},
},
...
```
5. create a pull request for datapackage-views-js
6. get the new datapackage-views-js version after merging (e.g. 1.3.0)
7. clone data-explorer
8. upgrade the data-explorer's datapackage-views-js dependency with the new version:
a. update package.json
b. run `yarn install`
9. add the component's translations path to Data Explorer:
```javascript
import en from './locales/en/translation.json'
import da from './locales/da/translation.json'
import ru from './locales/ru/translation.json'
...
ru: {
translation: {
...require('./locales/ru/translation.json'),
...require('datapackage-views-js/src/i18n/locales/ru/translation.json'),
}
},
...
```
10. create a merge request for data-explorer
### Testing a Newly Added Language
To see your language changes in Data Explorer you can run `yarn start` and change the language cookie of the page (`defaultLocale`):
![i18n Cookie](/static/img/docs/dms/data-explorer/i18n-cookie.png)
### Language detection
Language detection rules are determined by the `detection` option in the `src/i18n/i18n.js` file. Please edit with care, as other projects may already depend on them.
## Embedding in CKAN NG Theme
### Copy bundle files to theme's `public` directory
```bash
$ cp data-explorer/build/static/js/*.js frontend-v2/themes/your_theme/public/js
$ cp data-explorer/build/static/js/*.map frontend-v2/themes/your_theme/public/js
$ cp data-explorer/build/static/css/* frontend-v2/themes/your_theme/public/css
```
#### Note on app bundles
The bundled resources have a hash in the filename, for example `2.a3e71132.chunk.js`
During development it may be preferable to remove the hash from the file name to avoid having to update the script tag during iteration, for example
```bash
$ mv 2.a3e71132.chunk.js 2.chunk.js
```
A couple of caveats:
* The `.map` file names should remain the same so that they are loaded properly
* Browser cache may need to be invalidated manually to ensure that the latest script is loaded
### Require Data Explorer resources in NG theme template
In `/themes/your-theme/views/your-template-with-explorer.html`
```html
<!-- Everything before the content block goes here -->
{% block content %}
<!-- Data Explorer CSS -->
<link rel="stylesheet" type="text/css" href="/static/css/main.chunk.css">
<link rel="stylesheet" type="text/css" href="/static/css/2.chunk.css">
<!-- End Data Explorer CSS -->
```
### Configure datapackage
```htmlmixed=
<!-- where datapackage is -->
<script>
const datapackage = {
resources: [{resource}], // single resource for this view
views: [...], // can be 3 views aka widgets
controls: {
showChartBuilder: true,
showMapBuilder: true
}
}
</script>
```
### Add data-explorer tags to the page markup
Each Data Explorer instance needs a corresponding `<div>` in the DOM. For example:
```html
{% for resource in dataset.resources %}
<div class="data-explorer" id="data-explorer-{{ loop.index - 1 }}" data-datapackage='{{ dataset.dataExplorers[loop.index - 1] | safe}}'></div>
{% endfor %}
```
Note that each container div needs the following attributes:
* `class="data-explorer"` (All explorer divs should have this class)
* `id="data-explorer-0"` (1, 2, etc...)
* `data-datapackage='{JSON CONFIG}'` (a valid JSON configuration)
### Add data explorer scripts to your template
```html
<script type="text/javascript" src="/static/js/runtime~main.js"></script>
<script type="text/javascript" src="/static/js/2.chunk.js"></script>
<script type="text/javascript" src="/static/js/main.chunk.js"></script>
```
*NOTE* that the scripts should be loaded _after_ the container divs are in the DOM, typically by placing the `<script>` tags at the bottom of the footer.
See [a real-world example here](https://gitlab.com/datopian/clients/ckan-montreal/blob/master/views/showcase.html)
## New builds
In order to build files for production, run `npm run build` or `yarn build`.
You need to have **node version >= 12** in order to build files. Otherwise a 'heap out of memory error' gets thrown.
### Component changes
If the changes involve component updates that live in separate repositories make sure to upgrade them too before building:
1. Prepare the component with a dist version (e.g. run `yarn build:package` in the component repo; see [this](/docs/dms/data-explorer/datastore-query-builder#release) for an example)
2. run `yarn add <package>` to get the latest changes, e.g. `yarn add @datopian/datastore-query-builder` (do not use `yarn upgrade`; see why here: https://github.com/datopian/data-explorer/issues/28#issuecomment-700792966)
3. you can verify changes in `yarn.lock` - there should be the latest component commit id
4. `yarn build` in data-explorer
### Testing not yet released component changes
If there are changes to be tested that are not yet ready to be released in a component, the best option is to use
cosmos directly in the component repository. If that is not enough, you can temporarily add the dependency from a
branch:
```
yarn add https://github.com/datopian/datastore-query-builder.git#<branch name>
```
## Appendix: Design
See [Data Explorer Design page &raquo;](/docs/dms/data-explorer/design/)


@ -1,109 +0,0 @@
---
sidebar: auto
---
# Datastore Query Builder
This project was bootstrapped with [Create React App](https://github.com/facebook/create-react-app).
The code repository is located at github - https://github.com/datopian/datastore-query-builder.
## Usage
Install it:
```
yarn add @datopian/datastore-query-builder
```
Basic usage in a React app:
```JavaScript
import React from 'react'
import { QueryBuilder } from 'datastore-query-builder'
export const MyComponent = ({ resource, action }) => {
// `resource` is a resource descriptor that must have 'name', 'id' and
// 'schema' properties.
// `action` - this should be a Redux action that expects back the resource
// descriptor with updated 'api' property. It is up to your app to fetch data.
return (
<QueryBuilder resource={resource} filterBuilderAction={action} />
)
}
```
Note that this app doesn't fetch any data - it only builds the API URI based on the user's
selection.
It's easiest to learn from the examples provided in the `/__fixtures__/` directory.
## Features
* Date Picker - if the resource descriptor has a field with `date` type it will be displayed as a date picker element:
![Date Picker](/static/img/docs/dms/data-explorer/date-picker.png)
## Available Scripts
In the project directory, you can run:
### `yarn cosmos` or `npm run cosmos`
Runs dev server with the fixtures from `__fixtures__` directory. Learn more about `cosmos` - https://github.com/react-cosmos/react-cosmos
### `yarn start` or `npm start`
Runs the app in the development mode.<br/>
Open [http://localhost:3000](http://localhost:3000) to view it in the browser.
The page will reload if you make edits.<br/>
You will also see any lint errors in the console.
### `yarn test` or `npm test`
Launches the test runner in the interactive watch mode.<br/>
See the section about [running tests](https://facebook.github.io/create-react-app/docs/running-tests) for more information.
### `yarn build:package` or `npm run build:package`
Run this to compile your code so it is installable via yarn/npm.
### `yarn build` or `npm run build`
Builds the app for production to the `build` folder.<br/>
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.<br/>
Your app is ready to be deployed!
See the section about [deployment](https://facebook.github.io/create-react-app/docs/deployment) for more information.
## Release
When releasing a new version of this module, please make sure you've built the compiled version of the files:
```bash
yarn build:package
# Since this a release, you need to change version number in package.json file.
# E.g., this is a patch release so my `0.3.6` will become `0.3.7`.
# Now commit the changes
git add dist/ package.json
git commit -m "[v0.3.7]: your commit message."
```
Next, you need to tag your commit and add some descriptive message about the release:
```bash
git tag -a v0.3.7 -m "Your release message."
```
Now you can push your commits and tags:
```bash
git push origin branch && git push origin branch --tags
```
The tag will initiate a GitHub Action that will publish the release to NPM.


@ -1,145 +0,0 @@
# Data Explorer Design
>[!note]
Design sketches from Aug 2019. This remains a work in progress though a good part was implemented in the new [Data Explorer](/docs/dms/data-explorer).
## Job Stories
[Preview] As a Data Consumer I want to have a sense of what data there is in a dataset's resources before I download it (or download an individual resource) so that I don't waste my time and get interested
[Preview] As a Data Consumer I want to view (the most important contents of) a resource without downloading it and opening it so I save time (and don't have to get specialist tools)
[Preview - with tweaks] As a Data Consumer I want to be able to display tabular data with geo info on a map so that I can see it in an easily comprehensible way
[Explorer] As a Viewer I want to explore (filter, facet?) a dataset so I can find the data I'm looking for ...
[Explorer - map] As a Viewer I want to filter down the data I display on the map so that I can see the data I want
[Map / Dash Creator] As a Publisher I want to create a custom map or dashboard so that I can display my data to viewers powerfully
[View the data] As a User, I want to see my city related data (eg, crime, road accidents) on the map so that:
* I can easily understand which area is safe for me.
* I can evaluate different neighbourhoods when planning a move.
As a User from city council, I want to see my city related data (eg, traffic) on the map so that I can take better actions to improve the city (make it safe for citizens).
> is this self-service created, a custom map made by publisher, an auto-generated map (e.g. preview)
[Data Explorer] As a Power User I want to do SQL queries on the datastore so that I can display / download the results and get insight without having to download the data into my own tool and do that wrangling
## Architecture
```mermaid
graph LR
subgraph "Filter UI"
simpleselectui[Filter by columns explicitly]
sqlfilterui[SQL UI]
richselectui[Filter and Group by etc in a UI]
end
subgraph Renderers
tableview[Table Renderer]
chartview[Chart Renderer]
mapview[Map Renderer]
end
subgraph Builders
datasetselector[Select datasets to use, potentially with combination]
chartbuilder[Chart Builder - UI to create a chart]
mapbuilder[Map Builder]
end
subgraph APIs
queryapi[Abstract Query API implemented by others]
datastoreapijs[DataStore API wrapper - returns a Data Package with cached data and query as url?]
datajs[Data Package - Data in Memory: Dataset and Table objects]
datajsquery[Query Wrapper Around Dataset with cached data in memory]
end
classDef todo fill:#f9f,stroke:#333,stroke-width:1px
classDef working fill:#00ff00,stroke:#333,stroke-width:1px
class chartbuilder todo;
class chartview,tableview,mapview,simpleselectui working;
```
Filter UI updates Redux Store using a one-way data binding as the ONLY way to modify application state or component state (except internal state of components as needed):
```mermaid
graph TD
FilterUI_Update --> ReduxACTION:UpdateFilters
ReduxACTION:UpdateFilters --> RefetchData
ReduxACTION:UpdateFilters --> updateUIState
RefetchData --store.workingData--> UpdateStore
updateUIState --store.uiState--> UpdateStore
UpdateStore --> RerenderApp
```
## Interfaces to define
```
dataset => data package
query[Query - source data package + cached data + filter state]
workingdataset[Working Dataset in Memory]
chartconfig[]
mapconfig[]
```
### Redux store / top level state
```javascript=
queryuistate: {
// url / data package rarely changes during lifetime of explorer usually
url: datastore url / or an original data package
filters: ...
sqlstatement:
}
// list of datasets / resources we are working with ...
datasets/resources: [
]
layout: [
// this is the switcher layout where you only see one widget at a time
layouttype: chooser; // chooser aka singleton, stacked, custom ...
views: [list of views in their order]
]
views: [
{
type:
resource:
char
}
]
```
## Research
### Our One
![](https://i.imgur.com/XAdHq26.jpg)
### Redash
![](https://i.imgur.com/6JssnLA.png)
### Metabase
https://github.com/metabase/metabase
![](https://i.imgur.com/bOjIKdE.png)
### CKAN Classic
![](https://i.imgur.com/tGdupkz.png)
![](https://i.imgur.com/fDtjGSk.png)
### Rufus' Data Explorer (2014)
![](https://i.imgur.com/XJMHRes.png)


@ -1,18 +0,0 @@
# Data Lake
A data lake is a repository -- typically a large one -- for storing data of many types. They are more flexible (less structured) than their predecessor, Data Warehouses. At their crudest they are little more than raw storage with an organizational structure plus, maybe, a catalog. At their most sophisticated they can become an entire data management infrastructure.
The flexibility of the data lake concept is both its advantage and a limitation: almost any data architecture that includes collecting organizational data together could be described as a data lake.
At a practical level, the flexibility can become a limitation in that **data lakes become data swamps**: the lack of structure in data lakes often limits the usability of the lake: data cannot be found or is not of adequate quality. As ThoughtWorks notes: "Many enterprises failed to generate a return on their investment because they had quality issues with the data in their lakes or had invested significant sums in creating their lakes before identifying use cases."[^1]
[^1]: https://www.thoughtworks.com/decoder/data-lake
## Schematic overview of a Data Lake Architecture
<img src="https://docs.google.com/drawings/d/e/2PACX-1vThZmi5ok8VNaM03Vj5RQHJRQiZJIkrxaU08vpG_T_kcElFQDCO7bZVO1FJzcpR2X8wfKZVWdWXpLUz/pub?w=1159&amp;h=484" />
## References
* https://www.thoughtworks.com/decoder/data-lake
* https://martinfowler.com/articles/data-monolith-to-mesh.html


@ -1,287 +0,0 @@
# Data Portals
> *Data Portals have become essential tools in unlocking the value of data for organizations and enterprises ranging from the US government to Fortune 500 pharma companies, from non-profits to startups. They provide a convenient point of truth for discovery and use of an organization's data assets. Read on to find out more.*
## Introduction: Data Portals are Gateways to Data
A Data Portal is a gateway to data. That gateway can be big or small, open or restricted. For example, data.gov is open to everyone, whilst an enterprise "intra" data portal is restricted to that enterprise (and perhaps even to certain people within it).
A Data Portal's core purpose is to enable the rapid discovery and use of data. However, as a flexible, central point of truth on an organization's data assets, a Data Portal can become essential data infrastructure and be extended or integrated to provide many additional features:
* Data storage and APIs
* Data visualization and exploration
* Data validation and schemas
* Orchestration and integration of data
* Data Lake coordination and organization
The rise of Data Portals reflects the rapid growth in the volume and variety of data that organizations hold and use. With so much data available internally (and externally) it is hard for users to discover and access the data they need. And with so many potential users and use-cases it is hard to anticipate what data will be needed, when.
Concretely: how does Jane in the new data science team know that Geoff in accounting has the spreadsheet she needs for her analysis for the COO? Moreover, it is not enough just to have a dataset's location: if users are to easily discover and access data, it has to be suitably organized and presented.
Data portals answer this need: by making it easy to find and access data, a data portal helps solve these problems. As a result, data portals have become essential tools for organizations to bring order to the "data swamp" and unlock the value of their data assets.[^1]
[^1]: The nature of the problem that Data Portals solve (i.e. bringing order to diverse, distributed data assets) explains why data portals first arose in Government and as *open* data portals. Government had lots of useful data, much of it shareable, but poorly organized and strewn all over the place. In addition, much of the value of that data lay in unexpected or unforeseen uses. Thus, Data Portals in their modern form started in Government in the mid-late 2000s. They then spread into large companies and then with the spread of data into all kinds of organizations big and small.
## Why Data Portals?
### Data Variety and Volume have Grown Enormously
The volume and variety of data available has grown enormously. Today, even small organizations have dozens of data assets ranging from spreadsheets in their cloud drive to web analytics. Meanwhile, large organizations can have an enormous -- and bewildering -- amount and variety of data ranging from Hadoop clusters and data warehouses to CRM systems plus, of course, plenty of internal spreadsheets, databases etc.
In addition to this diversity of *supply* there has been a huge growth in the potential and diversity of *demand* in the form of users and use cases. Self-service business intelligence, machine learning and even tools like google spreadsheets have democratized and expanded the range of users. Meanwhile, data is no longer limited to a single purpose: much of the *new* value for data for enterprises comes from unexpected or unplanned uses and from combining data across divisions and systems.
### This Creates a Challenge: Getting Lost in Data
As organizations seek to reap the benefits of this data cornucopia they face a problem: with so much data around it's easy to get lost -- or even just not know that data exists. And, as supply and demand have expanded and diversified it has got both harder and more important to match them up.
The addition of data integration and data engineering can actually make this problem even worse -- do we need to create this new dataset X from Y and Z, or do we already have that somewhere? And how can people find X once we have created it? Is X a finished dataset that people can rely on, or is it a one-off? Even if it is a one-off, do we want to record that we created this kind of dataset so we can create it again in the future if we need it?[^lakes]
### Data Portals are a Platform that Connect Supply and Demand for Data
By making it easy to find and access data, a data portal helps address all these problems. As a platform it connects creators and users of data in a single place. As a single source of base metadata it provides essential infrastructure for data integration. By acting as a central repository of data it enables new forms of publication and sharing. Data Portals therefore play a central role in unlocking the value of data for organizations.
[^lakes]: Ditto for data lakes: the growth of data lakes has made data portals (and metadata management) even more important, because without them your data lake quickly turns into a data swamp where data is hard to locate and, even if found, lacks the essential metadata and structure that would make it usable.
### Data Portals as the First Step in an Agile Data Strategy
Data portals are also an initial, concrete step in data engineering / data strategy. Suppose you are a newly arrived CDO.
The first questions you will be asking are things like: what data do we have, where is it, what state is it in? (And secondarily, what data use cases do we have? Who has them? Do they match against the data we have?).
This immediately leads to the need to do a data inventory. And for a data inventory you need a tool to hold (and structure) the results -- that is, a data portal.
```mermaid
graph TD
cdo[Chief Data Officer]
what[What data do we have?]
inventory["We need a data inventory"]
portal[We need a data portal / catalog]
cdo --> what
what --> inventory
inventory --> portal
```
Even in more sophisticated situations, a data portal is a great place to start. Suppose you are a newly arrived CDO at an organization with an existing data lake and a rich set of data integration and data analytics workflows.
There is a good chance your data lake is rapidly becoming a data swamp, and there is nothing to track dependencies and connections in those data and analytics pipelines. Again, a simple data portal is a great place to start in bringing some order to this: lightweight, vendor-independent and (if you choose CKAN) open source infrastructure that gives you a simple solution for collecting and tracking metadata across datasets and data workflows.
### Summary: Data Portals make Data Discoverable and Accessible and provide a Foundation for Integration
In summary, Data Portals deliver value in three distinct, interlocking and incremental ways by:
* Making data discoverable: ranging from Excel files to Hadoop clusters. The portal does this by providing the infrastructure *and* process for reliable, cross-system metadata management and access (via humans *and* machines)
* Making data accessible: whether it's an Excel file or a database cluster, the portal's combination of common metadata, data showcases and data APIs makes data easily and quickly accessible to technical and non-technical users. Data can now be previewed and even explored in one location prior to use.
* Making data reliable and integrable: as a central store of metadata and data access points, the data portal is a natural starting point for enriching data with data dictionaries (what does column `custid` mean?), data mappings (this column in this data file is a customer ID and the customer master data lives in this other dataset), and data validation (does this column of dates contain valid dates, and are some of them out of range?) -- see the sketch below
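To make the validation point concrete, here is a minimal sketch. It assumes the frictionless-py library and a hypothetical local `sales.csv` file that has been catalogued in the portal; it illustrates the idea rather than any particular portal's built-in tooling.

```python
# Minimal validation sketch (assumes the frictionless-py library and a
# hypothetical local file sales.csv; not tied to any particular portal).
from frictionless import validate

report = validate("sales.csv")   # checks structure, column types and constraints
print(report.valid)              # False if, say, a date column holds out-of-range values
```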
In addition, in terms of proper data infrastructure and data engineering, a Data Portal provides an initial starting point, simple scaffolding and a solid foundation. It is an organizing point and Rosetta Stone for data discovery and metadata.
* TODO: this really links into the story of how to start doing data engineering / building a data lake / doing agile data etc etc
* For example, suppose you want to do some non-trivial business intelligence. You'll need to pull together a list of datasets -- maybe sales, plus analytics, plus some public economic data. Where are you going to track those datasets? Where are you going to track the resulting datasets you produce?
* For example, suppose your data engineering team are building out data pipelines. These pipelines pull in a variety of datasets, integrate and transform them, and then save the results. How are they going to track what datasets they are using and what they have produced? They are going to need a catalog. Rather than inventing their own (the classic "json file in git or spreadsheet in Google Docs" approach), you want them to use a proper catalog (or integrate with your existing one).
* Using an open source, service-oriented data portal framework like CKAN you can rapidly integrate and scale out your data orchestration -- for example, by registering each pipeline output via the catalog's API, as sketched below. It provides a "small pieces, loosely joined" approach to developing your data infrastructure starting from the basics: what datasets do you have, what datasets do you want to create?
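For illustration, a minimal sketch of what "use a proper catalog" could look like for a pipeline, using CKAN's standard Action API (`package_create`). The portal URL, API token and dataset fields below are hypothetical placeholders, not a recommended configuration.

```python
# Register a pipeline output in a CKAN catalog via the Action API.
# The URL, token and dataset values are hypothetical placeholders.
import requests

CKAN_URL = "https://catalog.example.org"
API_TOKEN = "REPLACE_WITH_A_REAL_TOKEN"

dataset = {
    "name": "daily-sales-aggregated",                      # catalog identifier
    "title": "Daily sales, aggregated by region",
    "owner_org": "data-engineering",                       # publishing organization
    "notes": "Produced nightly by the sales ETL pipeline.",
}

resp = requests.post(
    f"{CKAN_URL}/api/3/action/package_create",
    json=dataset,
    headers={"Authorization": API_TOKEN},
)
resp.raise_for_status()
print("Registered dataset id:", resp.json()["result"]["id"])
```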
## What does a Data Portal do?
### A Data Portal provides a Catalog
In essence, a Data Portal is a catalog of datasets. Even here there are degrees: at its simplest a catalog is just a list of dataset names and links, whilst more sophisticated catalogs will have elaborate metadata on each dataset.
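As a rough illustration (field names are hypothetical, loosely CKAN/DCAT-flavoured), the difference might look like this:

```python
# A bare listing: just a name and a link.
bare_entry = {"name": "sales-2023", "url": "https://example.org/sales-2023.csv"}

# A richer catalog record: title, description, ownership, licensing and resources.
rich_entry = {
    "name": "sales-2023",
    "title": "Sales transactions 2023",
    "description": "All point-of-sale transactions for FY2023.",
    "owner": "finance-team",
    "license": "internal-only",
    "resources": [{"format": "CSV", "url": "https://example.org/sales-2023.csv"}],
}
```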
### And Much More ...
Along with the essential basic catalog features, modern portals now incorporate an extensive range of functionality for organizing, structuring and presenting data including:
* **Publication workflow and metadata management**: rich editing interfaces and workflows (for example, approval steps), bulk editing of metadata etc
* **Showcasing and presentation of datasets**: extending to interactive exploration. For example, if a dataset contains an Excel file, in addition to linking to that file the portal will also display the contents in a table, allow users to create visualizations, and even to search and explore the data
* **Data storage and APIs**: as well as cataloging metadata and linking to data stored elsewhere, data portals can also store data. Building off this, data portals can provide "data APIs" to the underlying data to complement direct access and download. These APIs make it much quicker and easier for users to build their own rich applications and analytics workflows (see the sketch after this list).
* **Permissions**: fine-grained permissions to control access to data and related materials.
* **Data ingest and transformation:** ETL style functionality e.g. for bulk harvesting metadata, or preparing or excerpting data for presentation (for example, loading data to power data APIs)
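As an example of the data API point above, here is a minimal sketch against CKAN's `datastore_search` action; the portal URL and resource id are hypothetical placeholders.

```python
# Query a few matching rows from a portal's data API instead of downloading
# the whole file. The URL and resource_id are hypothetical placeholders.
import requests

resp = requests.get(
    "https://catalog.example.org/api/3/action/datastore_search",
    params={"resource_id": "customer-orders-2023", "q": "acme", "limit": 5},
)
resp.raise_for_status()
for record in resp.json()["result"]["records"]:
    print(record)
```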
Moreover, as a flexible, central point of truth on an organization's data assets, a Data Portal can become the foundation for broader data infrastructure and data management, for example:
* Orchestration of data integration: as a central repository of metadata, data portals are perfectly placed to integrate with wider data engineering and ETL workflows
* Data quality and provenance tracking
* Data Lake coordination and organization
## What are the main features of a Data Portal?
We focus here on "functional" features rather than technical ones.
Each functional feature may require one or more of these technical components:
* Storage
* API
* Frontend
* Admin UI (WebUI, possibly CLI, Mobile etc)
### High Level Overview
```mermaid
graph LR
perdataset --> storage[Store Data]
perdataset --> metadata[Store Metadata]
perdataset --> versioning
perdataset --> events[Notifications of Life Cycle events]
perdataset --> basic[Basic Access Control]
permissions --> auth[Identity]
permissions --> authz[Authorization]
permissions --> permintr[Permissions Integration]
hub --> showcase["(Pre)Viewing the Dataset"]
hub --> discovery[Discovery]
hub --> orgs[Users, Teams and Ownership]
hub --> tags[Tags, Themes]
hub --> audit[Audit and Notifications]
integration[Data Integration] --> pipelines
integration --> harvesting
```
### Coggle Detailed Overview
https://coggle.it/diagram/Xiw2ZmYss-ddJVuK/t/data-portal-feature-breakdown
<iframe width='853' height='480' src='https://embed.coggle.it/diagram/Xiw2ZmYss-ddJVuK/b24d6f959c3718688fed2a5883f47d33f9bcff1478a0f3faf9e36961ac0b862f' frameborder='0' allowfullscreen></iframe>
### Detailed Feature Breakdown
```mermaid
graph LR
dms[Basics]
dmsplus["Plus"]
cms[CMS]
theming[Theming]
permissions[Permissions]
datastore[Data API]
monitoring[Monitoring]
usage[Usage Analytics]
harvesting[Harvesting]
etl[ETL]
blog[Blog]
contact[Contact Page]
help[Support]
newsletter[Newsletter]
metadata[Metadata]
showcase[Showcase]
activity[Activity Streams]
search[Data Search]
catalogsearch[Catalog Search]
multi[Multi-language metadata]
resource[Resource previews]
xloader[xLoader]
datapusher[Data Pusher]
revision[Revisioning]
explorer[Data Explorer]
datavalidation[Data Validation]
filestore[FileStore]
siteadmin[Site Admin]
dms --> metadata
dms --> activity
dms --> catalogsearch
dms --> showcase
dms --> resource
dms --> multi
dms --> filestore
dms --> theming
dms --> i18n
dms --> siteadmin
dmsplus --> permissions
dmsplus --> revision
dmsplus --> datastore
dmsplus --> monitoring
dmsplus --> usage
dmsplus --> search
dmsplus --> explorer
cms --> blog
cms --> contact
cms --> help
cms --> newsletter
etl --> datapusher
etl --> xloader
etl --> harvesting
etl --> datavalidation
```
* Theming - customizing the look and feel of the portal
* i18n
* CMS - e.g., news/ideas/about/docs. Learn about CMS options - [CMS](/docs/dms/data-portals/cms).
  * Blog
  * Contact page?
  * Help / Support / Chat
  * Newsletter
* DMS Basic - Catalog: manage/catalog multiple formats of data
  * Activity Streams
  * Data Showcase (aka Dataset view page)
  * Resource previews
  * Metadata creation and management
  * Multi-language metadata
  * Data import and storage
  * Storing data
  * Data Catalog searching
  * Data searching
  * Multiple Formats of data
  * Tagging and Grouping of Datasets
  * Organizations as publishers and teams
* DMS Plus
  * Permissions: identity, authentication, accounts and authorization (including "teams/groups/orgs")
  * Revisioning of data and metadata
  * DataStore and Data API: ....
  * Monitoring: who is doing what, audit log etc
  * Usage Analytics: e.g. number of views, amount of downloads, recent activity
* ETL: automated metadata and data import and processing (e.g. to the data store), data transformation ...
  * Harvesting: metadata and data harvesting
  * DataPusher
  * xLoader
* (Data) Explorer: Visualizations and Dashboards
* Data Validation
* DevOps
  * CKAN Cloud: multi-instance deployment and management
  * Backups / Disaster recovery
Not sure they merit an item ...
* Cross Platform
* Data Sharing: A place to store data, with a permanent link to share to the public.
* Discussions
* RSS
* Multi-file download
## CKAN the Data Portal Software
CKAN is the leading data portal software.
It is usable out of the box and can also serve as a powerful framework for creating tailored solutions.
CKAN's combination of an open source codebase and enterprise support makes it uniquely attractive for organizations looking to build customized, enterprise-grade solutions.
## Appendix
TODO: From Data Portal to DataHub (or Data Management System).
### Is a Data Catalog the same as a Data Portal? (Yes)
Is a data catalog the same as a data portal? Yes: Data Portals are the evolution of data catalogs.
Data Portals were originally called a variety of names, including "Data Catalog". As catalogs grew in features they evolved into full portals.
### Open Data Portals and Internal Data Portals.
Many initial data portals were "open" or public: that is, anyone could access them -- and the data they listed. This reflected the fact that these data portals were set up by governments seeking to maximize the value of their data by sharing it as widely as possible.
However, there is no reason a data portal need be "open". In fact, data portals internal to an enterprise are usually restricted to the organization or even specific teams within the enterprise.
@ -1,158 +0,0 @@
# DataFrame
Designing a dataframe.js - and understanding data libs and data models in general.
TODO: integrate https://github.com/datopian/dataframe.js - my initial review from ~ 2015 onwards.
## Introduction
Conceptually a data library consists of:
* A data model i.e. a set of classes for holding / describing data e.g. Series (Vector/1d array), DataFrame (Table/2d array) (and possibly higher dim arrays)
* Tooling
* Operations e.g. group by, query, pivot etc etc
* Import / Export: load from csv, sql, stata etc etc
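As a rough illustration of that split (this is not dataframe.js itself; all names are invented for the example, and the sketch is in Python purely for brevity), a minimal version might look like:

```python
# Illustrative sketch of the three pieces above: a data model (Series,
# DataFrame), one operation (filtering) and one import path (CSV).
# Names are invented for the example; this is not dataframe.js.
import csv


class Series:
    """A 1-d labelled array of values."""
    def __init__(self, values, name=None):
        self.values = list(values)
        self.name = name


class DataFrame:
    """A 2-d table held as a dict of equal-length columns."""
    def __init__(self, columns):
        self.columns = {name: Series(vals, name) for name, vals in columns.items()}

    def where(self, column, predicate):
        """Operation: keep only rows where predicate(column value) is true."""
        keep = [i for i, v in enumerate(self.columns[column].values) if predicate(v)]
        return DataFrame(
            {name: [s.values[i] for i in keep] for name, s in self.columns.items()}
        )

    @classmethod
    def from_csv(cls, path):
        """Import/export: load a table from a CSV file."""
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        return cls({k: [r[k] for r in rows] for k in rows[0]}) if rows else cls({})
```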
## Our need
We need to build tools for wrangling and presenting data ... that are ...
* focused on smallish data
* run in the browser and/or are lightweight/easy to install
Why? Because ...
* We want to build easy to use / install applications for non-developers (so they aren't going to use pandas or a jupyter notebook PLUS they want a UI PLUS probably not big data (or if it is we can work with a sample))
* We're often using these tools in web applications (or in e.g. desktop app like electron)
Discussion
* Could we not have the browser act as a thin client and push code to some backend ...? Yes, we could, but that means a whole other service ...
What we want: something like OpenRefine but running in the browser ...
### Why not just use R / Pandas
Context: R and Pandas are already awesome. In fact, super-awesome. And they have huge existing communities and ecosystems.
Furthermore, not only do they do data analysis (so all the data science folks are using them) but they are also pretty good for data wrangling (esp. pandas).
So, we'd heavily recommend these (esp. pandas) if you are a developer (and doing work on your local machine).
However, ...
* if you're not a developer they can be daunting (even wrapped up in a Jupyter notebook).
* if you are a developer and actually doing data engineering there are some issues
  * pandas is a "kitchen-sink" of a library and depends on numpy. This makes it a heavyweight dependency and harder to put into data pipelines and flows
  * their monolithic nature makes them hard to componentize ...
## Pandas
### Series
https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html#series
> Series is a one-dimensional labeled array capable of holding any data type (integers, strings, floating point numbers, Python objects, etc.). The axis labels are collectively referred to as the index. The basic method to create a Series is to call:
>
> ``>>> s = pd.Series(data, index=index)``
* Series is a 1-d array with the convenience of labelling each cell in the array with the index (which defaults to 0...n if not specified).
* This allows you to treat Series as an array *and* a dictionary
* You can give it a name: "Series can also have a name attribute: `s = pd.Series(np.random.randn(5), name='something')`"
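A small, illustrative example of those two access styles (the values, labels and name below are made up):

```python
import pandas as pd

s = pd.Series([10, 20, 30], index=["a", "b", "c"], name="revenue")
print(s.iloc[0])   # positional access, like an array -> 10
print(s["b"])      # label access, like a dict -> 20
print(s.name)      # 'revenue'
```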
### DataFrame
https://pandas.pydata.org/pandas-docs/stable/getting_started/dsintro.html
> DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. Like Series, DataFrame accepts many different kinds of input:
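For instance, the "dict of Series objects" view in the quote can be seen directly (the columns and values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "price": pd.Series([1.5, 2.0], index=["apples", "pears"]),
    "stock": pd.Series([10, 4], index=["apples", "pears"]),
})
print(df.loc["apples", "price"])  # columns may have different types -> 1.5
```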
### Higher dimensional arrays
Not supported. See xarray.
## XArray
Comment: mature and well thought out. Exists to generalize pandas to higher levels.
http://xarray.pydata.org/en/stable/ => multidimensional arrays in pandas
> xarray has two core data structures, which build upon and extend the core strengths of NumPy and pandas. Both data structures are fundamentally N-dimensional:
>
> DataArray is our implementation of a labeled, N-dimensional array. It is an N-D generalization of a pandas.Series. The name DataArray itself is borrowed from Fernando Perez's datarray project, which prototyped a similar data structure.
>
> Dataset is a multi-dimensional, in-memory array database. It is a dict-like container of DataArray objects aligned along any number of shared dimensions, and serves a similar purpose in xarray to the pandas.DataFrame.
(Personally not sure about the analogy: Dataset is like a collection of series *or* DataFrames)
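A small, illustrative example of the two structures (the dimension names, coordinates and values are made up):

```python
import numpy as np
import xarray as xr

temperature = xr.DataArray(
    np.random.rand(2, 3),
    dims=("year", "city"),
    coords={"year": [2019, 2020], "city": ["Berlin", "Paris", "Rome"]},
    name="temperature",
)
ds = xr.Dataset({"temperature": temperature})  # dict-like container of DataArrays
print(ds["temperature"].sel(city="Paris"))
```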
## NTS
* Pandas 2: https://dev.pandas.io/pandas2/ - https://github.com/pandas-dev/pandas2 (from 2017 for pandas2)
* Pandas 2 never happened https://github.com/pandas-dev/pandas2 (stalled in 2017 ...). May happen in 2021 according to this milestone for it https://github.com/pandas-dev/pandas/milestone/42
## Inbox
* Out-of-Core DataFrames for Python, ML, visualize and explore big tabular data at a billion rows per second. 3.2 ⭐
* https://ray.io/ - distributed computing in python (??)
* Seems to be an alternative / competitor to ray but more general (dask is very oriented to scaling pandas style stuff)
* https://modin.readthedocs.io/en/latest/ - a way to convert pandas to run in parallel "Scale your pandas workflow by changing a single line of code"
* https://github.com/reubano/meza - meza is a Python library for reading and processing tabular data. It has a functional programming style API, excels at reading/writing large files, and can process 10+ file types.
* Quite a few similarities to frictionless data type stuff
* Mainly active 2015-2017 afaict and last commit in 2018
* https://github.com/atlanhq/camelot PDF Table Extraction for Humans
* http://blaze.pydata.org/ - seems inactive since 2016 (according to blog) and github repos look quiet since ~ 2016
* Datashape - https://datashape.readthedocs.io/en/latest/overview.html - Datashape is a data layout language for array programming. It is designed to describe in-situ structured data without requiring transformation into a canonical form.
* Dask: https://dask.org - "Dask natively scales Python. Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love". Was part of Blaze and now split out as a separate project. this is still *very* active (in fact main maintainer formed a consulting company for this in 2020)
* https://github.com/dask/dask "Parallel computing with task scheduling"
* odo - https://odo.readthedocs.io/en/latest/ - https://github.com/blaze/odo - Odo: Shapeshifting for your data
> odo takes two arguments, a source and a target for a data transfer.
>
> ```
> >>> from odo import odo
> >>> odo(source, target) # load source into target
> ```
>
> It efficiently migrates data from the source to the target through a network of conversions.
### Blaze
The Blaze ecosystem is a set of libraries that help users store, describe, query and process data. It is composed of the following core projects:
* Blaze: An interface to query data on different storage systems
* Dask: Parallel computing through task scheduling and blocked algorithms
* Datashape: A data description language
* DyND: A C++ library for dynamic, multidimensional arrays
* Odo: Data migration between different storage systems
## Appendix: JS "DataFrame" Libraries
A list of existing libraries.
*Note: when we started researching this in 2015 there were none that we could find, so it is a good sign that they are now developing.*
* https://github.com/opensource9ja/danfojs 274⭐ - ACTIVE Last update Aug 2020
* https://github.com/StratoDem/pandas-js 280⭐ - INACTIVE last update sep 2017
* https://github.com/fredrick/gauss 428⭐ - INACTIVE last update 2015 - JavaScript statistics, analytics, and data library - Node.js and web browser ready
* https://github.com/Gmousse/dataframe-js 283⭐ - INACTIVE? started in 2016 and largely inactive since 2018 (though minor update in early 2019)
* dataframe-js provides another way to work with data in javascript (browser or server side) by using DataFrame, a data structure already used in some languages (Spark, Python, R, ...).
* Comment: support browser and node etc. Pretty well structured. A long way from Pandas still.
* https://github.com/osdat/jsdataframe 26⭐ - INACTIVE started in 2016 and not much activity since 2017. Seems fairly R oriented (e.g. melt)
* Jsdataframe is a JavaScript data wrangling library inspired by data frame functionality in R and Python Pandas. Vector and data frame constructs are implemented to provide vectorized operations, summarization methods, subset selection/modification, sorting, grouped split-apply-combine operations, database-style joins, reshaping/pivoting, JSON serialization, and more. It is hoped that users of R and Python Pandas will find the concepts in jsdataframe quite familiar.
* https://github.com/maxto/ubique 91⭐ - ABANDONED last update in 2015 and stated as discontinued. A mathematical and quantitative library for Javascript and Node.js
* https://github.com/misoproject/dataset 1.2k⭐️ - now abandonware as no development since 2014, site is down (and maintainers seem unresponsive) (was a nice project!)
Other ones (not very active or without much info):
* https://github.com/walnutgeek/wdf - 1⭐ "web data frame" last commit in 2014 http://walnutgeek.github.io/wdf/DataFrame.html
* https://github.com/cjroth/dataframes
* https://github.com/jpoles1/dataframe.js
* https://github.com/danrobinson/dataframes
### References
* https://stackoverflow.com/questions/30610675/python-pandas-equivalent-in-javascript/43825646 (has a community wiki section)
* https://www.man.com/maninstitute/short-review-of-dataframes-in-javascript (2018) - pretty good review in June 2018. As it points out, there is no clear solution.
@ -1,11 +0,0 @@
# DataHub Documentation
Welcome to the DataHub documentation.
DataHub is a platform for *people* to **store, share and publish** their data, **collect, inspect and process** it with **powerful tools**, and **discover and use** data shared by others.
Our focus is on data wranglers and data scientists: those who automate their work with data using code and command-line tools rather than editing it by hand (as, for example, many analysts do in Excel). Think people who use Python vs people who use Excel for data work.
Our goal is to provide simplicity *and* power.
[Developer Docs &raquo;](/docs/dms/datahub/developers) <3 Python, JavaScript and data pipelines? Start here!
@ -1,99 +0,0 @@
# Developers
This section of the DataHub documentation is for developers. Here you can learn about the design of the platform and how to get DataHub running locally or on your own servers, and the process for contributing enhancements and bug fixes to the code.
[![Gitter](https://img.shields.io/gitter/room/frictionlessdata/chat.svg)](https://gitter.im/frictionlessdata/chat)
## Internal docs
* [API](/docs/dms/datahub/developers/api)
* [Deploy](/docs/dms/datahub/developers/deploy)
* [Platform](/docs/dms/datahub/developers/platform)
* [Publish](/docs/dms/datahub/developers/publish)
* [User Stories](/docs/dms/datahub/developers/user-stories)
* [Views Research](/docs/dms/datahub/developers/views-research)
* [Views](/docs/dms/datahub/developers/views)
## Repositories
We use the following GitHub repositories for the DataHub platform:
* [DEPLOY][deploy] - Automated deployment
* [FRONTEND][frontend] - Frontend application in node.js
* [ASSEMBLER][assembler] - Data assembly line
* [AUTH][auth] - A generic OAuth2 authentication service and user permission manager.
* [SPECSTORE][specstore] - API server for managing a Source Spec Registry
* [BITSTORE][bitstore] - A microservice for storing blobs i.e. files.
* [RESOLVER][resolver] - A microservice for resolving datapackage URLs into more human readable ones
* [DOCS][docs] - Documentation
[deploy]: https://github.com/datahq/deploy
[frontend]: https://github.com/datahq/frontend
[assembler]: https://github.com/datahq/assembler
[auth]: https://github.com/datahq/auth
[specstore]: https://github.com/datahq/specstore
[bitstore]: https://github.com/datahq/bitstore
[resolver]: https://github.com/datahq/resolver
[docs]: https://github.com/datahq/docs
```mermaid
graph TD
subgraph Repos
frontend[Frontend]
assembler[Assembler]
auth[Auth]
specstore[Specstore]
bitstore[Bitstore]
resolver[Resolver]
docs[Docs]
end
subgraph Sites
dhio[datahub.io]
dhdocs[docs.datahub.io]
docs --> dhdocs
end
deploy((DEPLOY))
deploy --> dhio
frontend --> deploy
assembler --> deploy
auth --> deploy
specstore --> deploy
bitstore --> deploy
resolver --> deploy
```
## Install
We use several different services to run our platform; please follow the installation instructions here:
* [Install Assembler](https://github.com/datahq/assembler#assembler)
* [Install Auth](https://github.com/datahq/auth#datahq-auth-service)
* [Install Specstore](https://github.com/datahq/specstore#datahq-spec-store)
* [Install Bitstore](https://github.com/datahq/bitstore#quick-start)
* [Install DataHub-CLI](https://github.com/datahq/datahub-cli#usage)
* [Install Resolver](https://github.com/datahq/resolver#quick-start)
## Deploy
For deployment of the application in a production environment, please see [the deploy page][deploydocs].
[deploydocs]: /docs/dms/deploy
## DataHub CLI
The DataHub CLI is a Node.js library and command-line interface for interacting with a DataHub instance.
[CLI code](https://github.com/datahq/datahub-cli)