Compare commits
2 Commits

| Author | SHA1 | Date |
|---|---|---|
| alan-turin | 2a59b85b62 | |
| alan-turin | 7c845fe0e3 | |
@@ -1,18 +1,6 @@
## Intro

This page catalogues datasets annotated for hate speech, online abuse, and offensive language. They may be useful, e.g., for training a natural language processing system to detect this language.

It's built on top of [PortalJS](https://portaljs.org/), which lets you publish datasets, lists of offensive keywords, and static pages, all stored as markdown files inside the `content` folder.

- .md files inside `content/datasets/` will appear in the dataset list section of the homepage, be searchable, and have an individual page at `datasets/<file name>`
- .md files inside `content/keywords/` will appear in the list of offensive keywords section of the homepage and have an individual page at `keywords/<file name>`
- .md files inside `content/` will be converted to static pages at the URL `/<file name>`, e.g. `content/about.md` becomes `/about` (see the sketch below)
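To make the convention above concrete, here is a minimal JavaScript sketch of the folder-to-URL mapping. The helper and the sample file names are invented for illustration; the real routing is handled by PortalJS/Next.js, not by code like this.

```js
// Hypothetical helper, for illustration only.
function routeForContentFile(filePath) {
  return (
    '/' +
    filePath
      .replace(/^content\//, '') // strip the content folder prefix
      .replace(/\.md$/, '')      // drop the markdown extension
  );
}

console.log(routeForContentFile('content/about.md'));              // "/about"
console.log(routeForContentFile('content/datasets/abuseeval.md')); // "/datasets/abuseeval"
console.log(routeForContentFile('content/keywords/slur-list.md')); // "/keywords/slur-list"
```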
This is also a Next.js project, so you can use the following steps to run the website locally.

## Getting started

To get started, first install the npm dependencies:
To get started with this template, first install the npm dependencies:

```bash
npm install
@@ -25,3 +13,7 @@ npm run dev
```

Finally, open [http://localhost:3000](http://localhost:3000) in your browser to view the website.

## License

This site template is a commercial product and is licensed under the [Tailwind UI license](https://tailwindui.com/license).
@@ -21,7 +21,7 @@ export function Footer() {
      <Container.Inner>
        <div className="flex flex-col items-center justify-between gap-6 sm:flex-row">
          <p className="text-sm font-medium text-zinc-800 dark:text-zinc-200">
            Built with <a href='https://portaljs.org'>PortalJS 🌀</a>
            hatespeechdata maintained by <a href='https://github.com/leondz'>leondz</a>
          </p>
          <p className="text-sm text-zinc-400 dark:text-zinc-500">
            © {new Date().getFullYear()} Leon Derczynski. All rights
@@ -1,5 +0,0 @@
---
title: About
---

This is an about page, left here as an example
examples/alan-turing-portal/content/contributing.md (new file, 22 lines)
@@ -0,0 +1,22 @@
---
title: Contributing
---

We accept entries to our catalogue based on pull requests to the content folder. The dataset must be available for download to be included in the list. If you want to add an entry, follow these steps!

Please send just one dataset addition or edit at a time: edit it in, then save. This will make everyone’s life easier (including yours!)

- Go to the repository page, click the "Add file" dropdown, and then click "Create new file".

![](https://i.imgur.com/uieZCzv.png)

- On the following page, type `content/datasets/<name-of-the-file>.md` if you want to add an entry to the datasets catalogue, or `content/keywords/<name-of-the-file>.md` if you want to add an entry to the lists of abusive keywords.

![](https://i.imgur.com/pylXBSQ.png)

- Copy the contents of `templates/dataset.md` or `templates/keywords.md` respectively into the text area below, filling out the fields with the correct data format.

![](https://i.imgur.com/Ic9cKzq.png)

- Click on "Commit changes"; in the popup, make sure you give some brief detail on the proposed change, and then click on "Propose changes".

![](https://i.imgur.com/L3TI2rN.png)

- Submit the pull request on the next page when prompted.
@@ -1,14 +0,0 @@
---
title: AbuseEval v1.0
link-to-publication: http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf
link-to-data: https://github.com/tommasoc80/AbuseEval
task-description: Explicitness annotation of offensive and abusive content
details-of-task: "Enriched versions of the OffensEval/OLID dataset with the distinction of explicit/implicit offensive messages and the new dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICIT, NOT; Labels for abusive language: EXPLICIT, IMPLICIT, NOTABU"
size-of-dataset: 14100
percentage-abusive: 20.75
language: English
level-of-annotation: ["Tweets"]
platform: ["Twitter"]
medium: ["Text"]
reference: "Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., and Granitzer, M. 2020. \"I feel offended, don’t be abusive! implicit/explicit messages in offensive and abusive language\". The 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association."
---
@@ -1,14 +0,0 @@
---
title: "CoRAL: a Context-aware Croatian Abusive Language Dataset"
link-to-publication: https://aclanthology.org/2022.findings-aacl.21/
link-to-data: https://github.com/shekharRavi/CoRAL-dataset-Findings-of-the-ACL-AACL-IJCNLP-2022
task-description: Multi-class based on context dependency categories (CDC)
details-of-task: Detecting CDCs in abusive comments
size-of-dataset: 2240
percentage-abusive: 100
language: "Croatian"
level-of-annotation: ["Posts"]
platform: ["Posts"]
medium: ["Newspaper Comments"]
reference: "Ravi Shekhar, Mladen Karan and Matthew Purver (2022). CoRAL: a Context-aware Croatian Abusive Language Dataset. Findings of the ACL: AACL-IJCNLP."
---
@@ -1,14 +0,0 @@
---
title: Large-Scale Hate Speech Detection with Cross-Domain Transfer
link-to-publication: https://aclanthology.org/2022.lrec-1.238/
link-to-data: https://github.com/avaapm/hatespeech
task-description: Three-class (Hate speech, Offensive language, None)
details-of-task: Hate speech detection on social media (Twitter) including 5 target groups (gender, race, religion, politics, sports)
size-of-dataset: "100k English (27593 hate, 30747 offensive, 41660 none)"
percentage-abusive: 58.3
language: English
level-of-annotation: ["Posts"]
platform: ["Twitter"]
medium: ["Text", "Image"]
reference: "Cagri Toraman, Furkan Şahinuç, Eyup Yilmaz. 2022. Large-Scale Hate Speech Detection with Cross-Domain Transfer. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2215–2225, Marseille, France. European Language Resources Association."
---
@@ -1,14 +0,0 @@
---
title: Measuring Hate Speech
link-to-publication: https://arxiv.org/abs/2009.10277
link-to-data: https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech
task-description: 10 ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech), which are debiased and aggregated into a continuous hate speech severity score (hate_speech_score) that includes a region for counterspeech & supportive speech. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 identity subgroups.
details-of-task: Hate speech measurement on social media in English
size-of-dataset: "39,565 comments annotated by 7,912 annotators on 10 ordinal labels, for 1,355,560 total labels."
percentage-abusive: 25
language: English
level-of-annotation: ["Social media comment"]
platform: ["Twitter", "Reddit", "Youtube"]
medium: ["Text"]
reference: "Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv preprint arXiv:2009.10277."
---
@@ -1,14 +0,0 @@
---
title: Offensive Language and Hate Speech Detection for Danish
link-to-publication: http://www.derczynski.com/papers/danish_hsd.pdf
link-to-data: https://figshare.com/articles/Danish_Hate_Speech_Abusive_Language_data/12220805
task-description: "Branching structure of tasks: Binary (Offensive, Not), Within Offensive (Target, Not), Within Target (Individual, Group, Other)"
details-of-task: Group-directed + Person-directed
size-of-dataset: 3600
percentage-abusive: 0.12
language: Danish
level-of-annotation: ["Posts"]
platform: ["Twitter", "Reddit", "Newspaper comments"]
medium: ["Text"]
reference: "Sigurbergsson, G. and Derczynski, L., 2019. Offensive Language and Hate Speech Detection for Danish. ArXiv."
---
@@ -11,24 +11,3 @@ We provide a list of datasets and keywords. If you would like to contribute to o
If you use these resources, please cite (and read!) our paper: Directions in Abusive Language Training Data: Garbage In, Garbage Out. And if you would like to find other resources for researching online hate, visit The Alan Turing Institute’s Online Hate Research Hub or read The Alan Turing Institute’s Reading List on Online Hate and Abuse Research.

If you’re looking for a good paper on online hate training datasets (beyond our paper, of course!) then have a look at ‘Resources and benchmark corpora for hate speech detection: a systematic review’ by Poletto et al. in Language Resources and Evaluation.

## How to contribute

We accept entries to our catalogue based on pull requests to the content folder. The dataset must be available for download to be included in the list. If you want to add an entry, follow these steps!

Please send just one dataset addition or edit at a time: edit it in, then save. This will make everyone’s life easier (including yours!)

- Go to the repository page, click the "Add file" dropdown, and then click "Create new file".

![](https://i.imgur.com/uieZCzv.png)

- On the following page, type `content/datasets/<name-of-the-file>.md` if you want to add an entry to the datasets catalogue, or `content/keywords/<name-of-the-file>.md` if you want to add an entry to the lists of abusive keywords. If you just want to add a static page, you can leave it in the root of `content`; it will automatically be assigned a URL, e.g. `content/about.md` becomes the `/about` page.

![](https://i.imgur.com/pylXBSQ.png)

- Copy the contents of `templates/dataset.md` or `templates/keywords.md` respectively into the text area below, filling out the fields with the correct data format.

![](https://i.imgur.com/Ic9cKzq.png)

- Click on "Commit changes"; in the popup, make sure you give some brief detail on the proposed change, and then click on "Propose changes".

<img src='https://i.imgur.com/BxuxKEJ.png' style={{ maxWidth: '50%', margin: '0 auto' }}/>

- Submit the pull request on the next page when prompted.
@@ -23,7 +23,12 @@ import { serialize } from "next-mdx-remote/serialize";
 * @returns: { mdxSource: mdxSource, frontMatter: ...}
 */
const parse = async function (source, format) {
  const { content, data } = matter(source);
  const { content, data, excerpt } = matter(source, {
    excerpt: (file, options) => {
      // Generate an excerpt for the file
      file.excerpt = file.content.split("\n\n")[0];
    },
  });

  const mdxSource = await serialize(
    { value: content, path: format },
@@ -51,7 +56,7 @@ const parse = async function (source, format) {
      [
        rehypeAutolinkHeadings,
        {
          properties: { className: "heading-link" },
          properties: { className: 'heading-link' },
          test(element) {
            return (
              ["h2", "h3", "h4", "h5", "h6"].includes(element.tagName) &&
@@ -86,12 +91,14 @@ const parse = async function (source, format) {
        ],
        format,
      },
      scope: data,
    }
  );

  return {
    mdxSource: mdxSource,
    frontMatter: data,
    excerpt,
  };
};
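For context on the `excerpt` change to `parse` above, here is a minimal standalone sketch of how gray-matter's custom `excerpt` option behaves; it mirrors the first-paragraph split used in the diff, and the sample markdown string is invented for illustration.

```js
import matter from 'gray-matter'

// Invented sample input, for illustration only.
const source = `---
title: Example dataset
---

First paragraph, used as the excerpt.

Second paragraph, not part of the excerpt.`

const { data, excerpt } = matter(source, {
  excerpt: (file) => {
    // Same rule as in the diff: everything before the first blank line of the body.
    file.excerpt = file.content.split('\n\n')[0]
  },
})

console.log(data.title) // "Example dataset"
console.log(excerpt)    // the first paragraph of the body
```

The returned object then carries `excerpt` alongside `data` and `content`, which is what the updated `parse` passes back to its callers.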
Binary file not shown.

examples/alan-turing-portal/package-lock.json (generated, 21 lines)
@@ -36,8 +36,7 @@
        "focus-visible": "^5.2.0",
        "gray-matter": "^4.0.3",
        "hastscript": "^7.2.0",
        "mdx-mermaid": "^2.0.0-rc7",
        "mermaid": "^10.1.0",
        "mdx-mermaid": "2.0.0-rc7",
        "next": "13.2.1",
        "next-mdx-remote": "^4.4.1",
        "next-router-mock": "^0.9.3",
@@ -3339,6 +3338,7 @@
      "version": "7.0.10",
      "resolved": "https://registry.npmjs.org/dagre-d3-es/-/dagre-d3-es-7.0.10.tgz",
      "integrity": "sha512-qTCQmEhcynucuaZgY5/+ti3X/rnszKZhEQH/ZdWdtP1tA/y3VoHJzcVrO9pjjJCNpigfscAtoUB5ONcd2wNn0A==",
      "peer": true,
      "dependencies": {
        "d3": "^7.8.2",
        "lodash-es": "^4.17.21"
@@ -3353,7 +3353,8 @@
    "node_modules/dayjs": {
      "version": "1.11.7",
      "resolved": "https://registry.npmjs.org/dayjs/-/dayjs-1.11.7.tgz",
      "integrity": "sha512-+Yw9U6YO5TQohxLcIkrXBeY73WP3ejHWVvx8XCk3gxvQDCTEmS48ZrSZCKciI7Bhl/uCMyxYtE9UqRILmFphkQ=="
      "integrity": "sha512-+Yw9U6YO5TQohxLcIkrXBeY73WP3ejHWVvx8XCk3gxvQDCTEmS48ZrSZCKciI7Bhl/uCMyxYtE9UqRILmFphkQ==",
      "peer": true
    },
    "node_modules/debug": {
      "version": "4.3.4",
@@ -3543,7 +3544,8 @@
    "node_modules/dompurify": {
      "version": "2.4.5",
      "resolved": "https://registry.npmjs.org/dompurify/-/dompurify-2.4.5.tgz",
      "integrity": "sha512-jggCCd+8Iqp4Tsz0nIvpcb22InKEBrGz5dw3EQJMs8HPJDsKbFIO3STYtAvCfDx26Muevn1MHVI0XxjgFfmiSA=="
      "integrity": "sha512-jggCCd+8Iqp4Tsz0nIvpcb22InKEBrGz5dw3EQJMs8HPJDsKbFIO3STYtAvCfDx26Muevn1MHVI0XxjgFfmiSA==",
      "peer": true
    },
    "node_modules/dot-prop": {
      "version": "5.3.0",
@@ -7531,6 +7533,7 @@
      "version": "10.1.0",
      "resolved": "https://registry.npmjs.org/mermaid/-/mermaid-10.1.0.tgz",
      "integrity": "sha512-LYekSMNJygI1VnMizAPUddY95hZxOjwZxr7pODczILInO0dhQKuhXeu4sargtnuTwCilSuLS7Uiq/Qn7HTVrmA==",
      "peer": true,
      "dependencies": {
        "@braintree/sanitize-url": "^6.0.0",
        "@khanacademy/simple-markdown": "^0.8.6",
@@ -7555,6 +7558,7 @@
      "version": "0.8.6",
      "resolved": "https://registry.npmjs.org/@khanacademy/simple-markdown/-/simple-markdown-0.8.6.tgz",
      "integrity": "sha512-mAUlR9lchzfqunR89pFvNI51jQKsMpJeWYsYWw0DQcUXczn/T/V6510utgvm7X0N3zN87j1SvuKk8cMbl9IAFw==",
      "peer": true,
      "dependencies": {
        "@types/react": ">=16.0.0"
      },
@@ -15735,6 +15739,7 @@
      "version": "7.0.10",
      "resolved": "https://registry.npmjs.org/dagre-d3-es/-/dagre-d3-es-7.0.10.tgz",
      "integrity": "sha512-qTCQmEhcynucuaZgY5/+ti3X/rnszKZhEQH/ZdWdtP1tA/y3VoHJzcVrO9pjjJCNpigfscAtoUB5ONcd2wNn0A==",
      "peer": true,
      "requires": {
        "d3": "^7.8.2",
        "lodash-es": "^4.17.21"
@@ -15749,7 +15754,8 @@
    "dayjs": {
      "version": "1.11.7",
      "resolved": "https://registry.npmjs.org/dayjs/-/dayjs-1.11.7.tgz",
      "integrity": "sha512-+Yw9U6YO5TQohxLcIkrXBeY73WP3ejHWVvx8XCk3gxvQDCTEmS48ZrSZCKciI7Bhl/uCMyxYtE9UqRILmFphkQ=="
      "integrity": "sha512-+Yw9U6YO5TQohxLcIkrXBeY73WP3ejHWVvx8XCk3gxvQDCTEmS48ZrSZCKciI7Bhl/uCMyxYtE9UqRILmFphkQ==",
      "peer": true
    },
    "debug": {
      "version": "4.3.4",
@@ -15888,7 +15894,8 @@
    "dompurify": {
      "version": "2.4.5",
      "resolved": "https://registry.npmjs.org/dompurify/-/dompurify-2.4.5.tgz",
      "integrity": "sha512-jggCCd+8Iqp4Tsz0nIvpcb22InKEBrGz5dw3EQJMs8HPJDsKbFIO3STYtAvCfDx26Muevn1MHVI0XxjgFfmiSA=="
      "integrity": "sha512-jggCCd+8Iqp4Tsz0nIvpcb22InKEBrGz5dw3EQJMs8HPJDsKbFIO3STYtAvCfDx26Muevn1MHVI0XxjgFfmiSA==",
      "peer": true
    },
    "dot-prop": {
      "version": "5.3.0",
@@ -18840,6 +18847,7 @@
      "version": "10.1.0",
      "resolved": "https://registry.npmjs.org/mermaid/-/mermaid-10.1.0.tgz",
      "integrity": "sha512-LYekSMNJygI1VnMizAPUddY95hZxOjwZxr7pODczILInO0dhQKuhXeu4sargtnuTwCilSuLS7Uiq/Qn7HTVrmA==",
      "peer": true,
      "requires": {
        "@braintree/sanitize-url": "^6.0.0",
        "@khanacademy/simple-markdown": "^0.8.6",
@@ -18864,6 +18872,7 @@
      "version": "0.8.6",
      "resolved": "https://registry.npmjs.org/@khanacademy/simple-markdown/-/simple-markdown-0.8.6.tgz",
      "integrity": "sha512-mAUlR9lchzfqunR89pFvNI51jQKsMpJeWYsYWw0DQcUXczn/T/V6510utgvm7X0N3zN87j1SvuKk8cMbl9IAFw==",
      "peer": true,
      "requires": {
        "@types/react": ">=16.0.0"
      }
@@ -12,46 +12,47 @@
  },
  "browserslist": "defaults, not ie <= 11",
  "dependencies": {
    "@flowershow/core": "^0.4.10",
    "@flowershow/markdowndb": "^0.1.1",
    "@flowershow/remark-callouts": "^1.0.0",
    "@flowershow/remark-embed": "^1.0.0",
    "@flowershow/remark-wiki-link": "^1.1.2",
    "@headlessui/react": "^1.7.13",
    "@heroicons/react": "^2.0.17",
    "@mapbox/rehype-prism": "^0.8.0",
    "@mdx-js/loader": "^2.1.5",
    "@mdx-js/react": "^2.1.5",
    "@next/mdx": "^13.0.2",
    "@opentelemetry/api": "^1.4.0",
    "@tailwindcss/forms": "^0.5.3",
    "@tailwindcss/typography": "^0.5.4",
    "@tanstack/react-table": "^8.8.5",
    "@types/node": "18.16.0",
    "@types/react": "18.2.0",
    "@types/react-dom": "18.2.0",
    "autoprefixer": "^10.4.12",
    "clsx": "^1.2.1",
    "eslint": "8.39.0",
    "eslint-config-next": "13.3.1",
    "fast-glob": "^3.2.11",
    "feed": "^4.2.2",
    "flexsearch": "^0.7.31",
    "focus-visible": "^5.2.0",
    "gray-matter": "^4.0.3",
    "hastscript": "^7.2.0",
    "mdx-mermaid": "^2.0.0-rc7",
    "mermaid": "^10.1.0",
    "next": "13.2.1",
    "next-mdx-remote": "^4.4.1",
    "next-router-mock": "^0.9.3",
    "next-superjson-plugin": "^0.5.7",
    "papaparse": "^5.4.1",
    "postcss-focus-visible": "^6.0.4",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-hook-form": "^7.43.9",
    "react-markdown": "^8.0.7",
    "superjson": "^1.12.3",
    "tailwindcss": "^3.3.0",
    "@flowershow/core": "^0.4.10",
    "@flowershow/remark-callouts": "^1.0.0",
    "@flowershow/remark-embed": "^1.0.0",
    "@flowershow/remark-wiki-link": "^1.1.2",
    "@heroicons/react": "^2.0.17",
    "@opentelemetry/api": "^1.4.0",
    "@tanstack/react-table": "^8.8.5",
    "@types/node": "18.16.0",
    "@types/react": "18.2.0",
    "@types/react-dom": "18.2.0",
    "eslint": "8.39.0",
    "eslint-config-next": "13.3.1",
    "gray-matter": "^4.0.3",
    "hastscript": "^7.2.0",
    "mdx-mermaid": "2.0.0-rc7",
    "next": "13.2.1",
    "next-mdx-remote": "^4.4.1",
    "papaparse": "^5.4.1",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "react-vega": "^7.6.0",
    "rehype-autolink-headings": "^6.1.1",
    "rehype-katex": "^6.0.3",
@@ -60,9 +61,7 @@
    "remark-gfm": "^3.0.1",
    "remark-math": "^5.1.1",
    "remark-smartypants": "^2.0.0",
    "remark-toc": "^8.0.1",
    "superjson": "^1.12.3",
    "tailwindcss": "^3.3.0"
    "remark-toc": "^8.0.1"
  },
  "devDependencies": {
    "eslint": "8.26.0",
@@ -1,12 +1,9 @@
import { Container } from '../components/Container'
import clientPromise from '../lib/mddb'
import { promises as fs } from 'fs';
import fs from 'fs'
import { MDXRemote } from 'next-mdx-remote'
import { serialize } from 'next-mdx-remote/serialize'
import { Card } from '../components/Card'
import Head from 'next/head'
import parse from '../lib/markdown'
import { Mermaid } from '@flowershow/core';

export const getStaticProps = async ({ params }) => {
  const urlPath = params.slug ? params.slug.join('/') : ''
@@ -14,8 +11,8 @@ export const getStaticProps = async ({ params }) => {
  const mddb = await clientPromise
  const dbFile = await mddb.getFileByUrl(urlPath)

  const source = await fs.readFile(dbFile.file_path, 'utf-8')
  let mdxSource = await parse(source, '.mdx')
  const source = fs.readFileSync(dbFile.file_path, { encoding: 'utf-8' })
  const mdxSource = await serialize(source, { parseFrontmatter: true })

  return {
    props: {
@@ -76,15 +73,12 @@ const Meta = ({keyValuePairs}) => {
}

export default function DRDPage({ mdxSource }) {
  const meta = mdxSource.frontMatter
  const meta = mdxSource.frontmatter
  const keyValuePairs = Object.entries(meta).filter(
    (entry) => entry[0] !== 'title'
  )
  return (
    <>
      <Head>
        <title>{meta.title}</title>
      </Head>
      <Container className="mt-16 lg:mt-32">
        <article>
          <header className="flex flex-col">
@@ -96,7 +90,7 @@ export default function DRDPage({ mdxSource }) {
            </Card>
          </header>
          <div className="prose dark:prose-invert">
            <MDXRemote {...mdxSource.mdxSource} components={{mermaid: Mermaid}} />
            <MDXRemote {...mdxSource} />
          </div>
        </article>
      </Container>
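As background for the `[slug].js` change above, here is a minimal sketch of the `next-mdx-remote` pattern the new code relies on: `serialize` with `parseFrontmatter: true` exposes the front matter on its result, and spreading that result into `MDXRemote` renders the body. The file path and page component are invented for illustration; the real page resolves paths through markdowndb (`clientPromise`).

```jsx
// Sketch only, assuming a markdown file exists at the invented path below.
import fs from 'fs'
import { MDXRemote } from 'next-mdx-remote'
import { serialize } from 'next-mdx-remote/serialize'

export async function getStaticProps() {
  const source = fs.readFileSync('content/datasets/example.md', { encoding: 'utf-8' })
  const mdxSource = await serialize(source, { parseFrontmatter: true })
  return { props: { mdxSource } }
}

export default function ExamplePage({ mdxSource }) {
  const meta = mdxSource.frontmatter // parsed by serialize(); no separate matter() call needed
  return (
    <article>
      <h1>{meta.title}</h1>
      <MDXRemote {...mdxSource} />
    </article>
  )
}
```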
@@ -4,6 +4,7 @@ import fs from 'fs'
import { Card } from '../components/Card'
import { Container } from '../components/Container'
import clientPromise from '../lib/mddb'
import ReactMarkdown from 'react-markdown'
import { Index } from 'flexsearch'
import { useForm } from 'react-hook-form'
import Link from 'next/link'
@@ -108,6 +109,7 @@ export default function Home({
  datasets,
  indexText,
  listsOfKeywords,
  contributingText,
  availableLanguages,
  availablePlatforms,
}) {
@@ -139,7 +141,7 @@
        <h1 className="text-4xl font-bold tracking-tight text-zinc-800 dark:text-zinc-100 sm:text-5xl">
          {indexText.frontmatter.title}
        </h1>
        <article className="mt-6 index-text flex flex-col gap-y-2 text-base text-zinc-600 dark:text-zinc-400 prose dark:prose-invert">
        <article className="mt-6 flex flex-col gap-y-2 text-base text-zinc-600 dark:text-zinc-400">
          <MDXRemote {...indexText} />
        </article>
      </div>
@@ -234,6 +236,14 @@
        ))}
      </div>
      </Container>
      <Container className="mt-16">
        <h2 className="text-xl font-bold tracking-tight text-zinc-800 dark:text-zinc-100 sm:text-5xl">
          How to contribute
        </h2>
        <article className="mt-6 flex flex-col gap-y-8 text-base text-zinc-600 dark:text-zinc-400 contributing">
          <MDXRemote {...contributingText} />
        </article>
      </Container>
    </>
  )
}
@@ -260,7 +270,12 @@ export async function getStaticProps() {
  }))

  const index = await mddb.getFileByUrl('/')
  const contributing = await mddb.getFileByUrl('contributing')
  let indexSource = fs.readFileSync(index.file_path, { encoding: 'utf-8' })
  let contributingSource = fs.readFileSync(contributing.file_path, {
    encoding: 'utf-8',
  })
  contributingSource = await serialize(contributingSource, { parseFrontmatter: true })
  indexSource = await serialize(indexSource, { parseFrontmatter: true })

  const availableLanguages = [
@@ -274,6 +289,7 @@ export async function getStaticProps() {
      datasets,
      listsOfKeywords,
      indexText: indexSource,
      contributingText: contributingSource,
      availableLanguages,
      availablePlatforms,
    },
@@ -3,11 +3,6 @@
@import './prism.css';
@import 'tailwindcss/utilities';

.index-text ul,
.index-text p {
  margin: 0;
}

.index-text h2 {
  margin-top: 1rem;
.contributing li {
  margin-bottom: 1.75rem;
}
@@ -6,8 +6,7 @@
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "export": "npm run build && next export -o out"
    "lint": "next lint"
  },
  "dependencies": {
    "@flowershow/core": "^0.4.10",
@@ -116,57 +116,3 @@ From the browser, access http://localhost:3000. You should see the following:

<img src="/assets/docs/datasets-index-page.png" />

## Deploying your PortalJS app

Finally, let's learn how to deploy PortalJS apps to Vercel or Cloudflare Pages.

> [!tip]
> Although we are using Vercel and Cloudflare Pages in this tutorial, you can deploy apps to any hosting solution you want as a static website by running `npm run export` and distributing the contents of the `out/` folder.

### Push to a GitHub repo

The PortalJS app we built up to this point is stored locally. To allow Vercel or Cloudflare Pages to deploy it, we have to push it to GitHub (or another SCM supported by these hosting solutions).

- Create a new repository under your GitHub account
- Add the new remote origin to your PortalJS app
- Push the app to the repository

If you are not sure how to do this, follow this guide: https://nextjs.org/learn/basics/deploying-nextjs-app/github

> [!tip]
> You can also deploy using our Vercel deploy button. In this case, a new repository will be created under your GitHub account automatically.
> [Click here](#one-click-deploy) to scroll to the deploy button.

### Deploy to Vercel

The easiest way to deploy a PortalJS app is to use Vercel, a serverless platform for static and hybrid applications developed by the creators of Next.js.

To deploy your PortalJS app:

- Create a Vercel account by going to https://vercel.com/signup and choosing "Continue with GitHub"
- Import the repository you created for the PortalJS app at https://vercel.com/new
- During the setup process you can use the default settings; there is no need to change anything.

When you deploy, your PortalJS app will start building. It should finish in under a minute.

When it’s done, you’ll get deployment URLs. Click on one of the URLs and you should see your PortalJS app live.

> [!tip]
> You can find a more in-depth explanation of this process at https://nextjs.org/learn/basics/deploying-nextjs-app/deploy

#### One-Click Deploy

You can instantly deploy our example app to your Vercel account by clicking the button below:

[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fdatopian%2Fportaljs%2Ftree%2Fmain%2Fexamples%2Flearn-example&project-name=my-data-portal&repository-name=my-data-portal&demo-title=PortalJS%20Learn%20Example&demo-description=PortalJS%20Learn%20Example%20-%20https%3A%2F%2Fportaljs.org%2Fdocs&demo-url=learn-example.portaljs.org&demo-image=https%3A%2F%2Fportaljs.org%2Fassets%2Fexamples%2Fbasic-example.png)

This will create a new repository on your GitHub account and deploy it to Vercel. If you are following the tutorial, you can replicate the changes made to your local app in this new repository.

### Deploy to Cloudflare Pages

To deploy your PortalJS app to Cloudflare Pages, follow this guide:

https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/#deploy-with-cloudflare-pages-1

Note that you don't have to change anything; just follow the steps, choosing the repository you created.