This repository is the source code for the podnebnik.org website.
The project is structured as a multi-page, statically generated website that allows authors to create rich narratives by combining data, text and visualizations. Keeping data, content and code in one repository makes it possible to keep all three components in sync, maintain a single source of truth, and preserve a complete history of changes.
The project resources are organized in the following top-level folders:
- `data` folder contains the data in the form of Frictionless Data packages
- `code` folder contains the code for the visualizations
- `pages` folder contains the text content of the website
- `styles` folder contains the CSS stylesheets
- `assets` folder contains the static assets, such as images and fonts
- `deployment` folder contains the deployment definitions, such as Dockerfiles
The project is built on top of the following fantastic tools:
- 11ty static site generator
- Highcharts charting library
- Frictionless Data data packaging and validation
- Datasette SQLite database viewer
- Fable F# to JavaScript compiler
- TypeScript typed JavaScript for enhanced developer experience
- Tailwind CSS utility-first CSS framework
- Solid JS a reactive JavaScript library
For editing content or data packages, the simplest way to develop is using Docker. For this you will only need Docker and docker-compose installed. Then run:
docker-compose -f compose.yaml build
docker-compose -f compose.yaml up
This will build the website and Datasette in a similar way as in production. The website will appear on http://127.0.0.1:8003/ and Datasette will appear on http://127.0.0.1:8001/. If these ports clash with other things you might have running on your system, you can change the ports in compose.yaml to something else.
If you are editing content on the website, you can just edit the files and the webpage will auto-refresh on save.
If you are developing data packages and want to re-import data into Datasette, you need to build the Datasette image again:
docker-compose -f compose.yaml build datasette
Depending on whether you want to author data, text or visualizations, you will need to install different tools. However, the basic setup is the same for all three.
NOTE: If you are using the VS Code editor, you can use the Remote Containers extension and the provided development container to develop in a Docker container. This ensures that all the necessary tools are installed and configured automatically. You can find the configuration for the container in the `.devcontainer` folder. In this case you can skip the rest of this section (up to the NOTES) and start developing right away.
To start developing you need to have the following tools installed:
- `node` (https://nodejs.org/en/) with `corepack` enabled
- `yarn` (https://yarnpkg.com)
- .NET 10.0 (https://dotnet.microsoft.com/en-us/download)
- python 3.12 (https://www.python.org/)
- `uv` (https://docs.astral.sh/uv/)
Use uv to install and manage the Python version(s) on your system. Once you have uv installed, the project will automatically use it to install the correct Python version and packages.
Next, install the JavaScript dependencies:
yarn install
You can now start the development server:
yarn run start
and point your browser to:
http://127.0.0.1:8080/
NOTE: The project's requirements may change over time. To keep your development environment up to date, run `yarn install` from time to time. Also, in case of major changes to the JavaScript dependencies, you may need to run `yarn install --force` to force the installation of the new dependencies.
NOTE: The development server is configured to watch for changes in the `code`, `pages` and `styles` folders. If you make changes to any of these folders, the server will automatically rebuild the site and reload the browser. However, there may be cases where the server does not detect the changes. In that case, you can force the server to rebuild the site by pressing `Ctrl + C` and then running `yarn start` again. In some cases it may help to run `yarn clean` before running `yarn start` again. If you want to emulate the production setup locally, export `ELEVENTY_EMULATE_PRODUCTION=1`.
To import datapackages into datasette, run:
uv run invoke create-databases
To then start datasette, run:
uv run invoke datasette
A datapackage is a combination of data resources (`.csv` data files) and a datapackage descriptor file (`.yaml`) containing the metadata. Check the existing data packages for an example of how to write a descriptor file.
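As an illustration only, a minimal descriptor might look like the sketch below. The package name, resource name and fields are hypothetical; follow the existing packages in the `data` folder for the authoritative structure.

```yaml
# Hypothetical minimal datapackage descriptor; names and fields are illustrative.
name: example-emissions
title: Example emissions data
resources:
  - name: emissions
    path: emissions.csv
    schema:
      fields:
        - name: year
          type: integer
        - name: co2_tonnes
          type: number
```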
Data files should be in CSV format, as this is what our current system knows how to import into datasette.
After creating the CSV files and writing a package descriptor, validate it to check that the metadata matches the data:
uv run frictionless validate data/package/description.yaml
You can also check the old data repository for more hints.
The content is a collection of HTML and Markdown files in the pages folder. The URLs on the web page are derived from the file paths. For example, the file pages/objave/energetika.md will be available at https://podnebnik.org/objave/energetika/.
Probably the easiest way to start authoring content is to look at the existing pages in the pages folder to see how things are organized and copy an existing page as your starting point. This should in most cases be enough to get you started. However, for more details on how to organize and manipulate the content please look at the 11ty documentation.
NOTE: 11ty supports a number of templating languages. However, to keep markup consistent, the recommended templating language in this project is Liquid JS.
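To make the page layout concrete, a new page might look like the sketch below. The front matter keys shown are standard 11ty conventions, but the `layout` value is hypothetical; copy the front matter of an existing page in `pages` to get the real values used in this project.

```markdown
---
title: Example page
layout: article    # hypothetical layout name; copy from an existing page
---

# Example page

Regular Markdown content, with Liquid templating available, e.g. the
current URL is {{ page.url }}.
```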
This project now includes comprehensive TypeScript support alongside JavaScript. You can:
- Use TypeScript for new components and utilities (`.ts`, `.tsx` files)
- Keep JavaScript for existing code (`.js`, `.jsx` files continue working)
- Mix both: TypeScript and JavaScript files work together seamlessly
Key Benefits:
- Enhanced IDE support with autocomplete and error detection
- Type safety for API responses and component props
- Better refactoring support and code documentation
- Gradual adoption - no need to convert everything at once
Getting Started:
- New components: use the `.tsx` extension for SolidJS components with TypeScript
- Utilities: use the `.ts` extension for helper functions and data processing
- Examples: check `code/examples/types-example/` for comprehensive patterns
- Types: import shared types from the `code/types/` directory
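To show what a typed utility looks like in practice, here is a small sketch. The `EmissionRecord` shape and the function are illustrative, not real project types; see `code/types/` for the shared types actually used.

```typescript
// Illustrative only: EmissionRecord is a hypothetical shape, not a project type.
interface EmissionRecord {
  year: number;
  sector: string;
  co2Tonnes: number;
}

// Sum emissions per sector. The Map preserves first-seen sector order.
function totalBySector(records: EmissionRecord[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r.sector, (totals.get(r.sector) ?? 0) + r.co2Tonnes);
  }
  return totals;
}
```

With types like these, the IDE flags a misspelled field such as `co2Tones` immediately, which is the main day-to-day benefit of gradual TypeScript adoption.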
Resources:
- Examples: `code/examples/types-example/`
- Project types: `code/types/`
- Performance analysis: `docs/typescript-performance-analysis.md`
Technically and for the purpose of this project, a visualization is a JavaScript function that renders the content of the visualization in the provided DOM element. You can in principle use any JavaScript library or language that compiles to JavaScript. However, we recommend using Fable and/or JavaScript together with Solid JS as we have found these to be the most productive and performant for most cases. See code/examples for examples of visualizations.
This is an example of how to include a visualization in a page (the visualization uses Solid JS):
<div class="chart" id="my-chart">
<script type="module">
import { render } from 'solid-js/web'
import Chart from '/code/examples/javascript.highcharts/chart.jsx'
    render(() => Chart({kind: 'line'}), document.getElementById('my-chart'))
</script>
</div>

If you want to render your visualization lazily (recommended), you can use the provided lazy wrapper, which will render the visualization only when the user scrolls the document to that visualization:
<div class="chart" id="my-chart">
<script type="module">
import Lazy from '/code/lazy.jsx'
import { render } from 'solid-js/web'
import Chart from '/code/examples/javascript.highcharts/chart.jsx'
render(() => Lazy(() => Chart({kind: 'line'})), document.getElementById('my-chart'))
</script>
</div>

Visualizations will usually need to load some data. The two main cases are:
- Data is small enough to be loaded at once and embedded directly into the visualization.
- Data is large and needs to be loaded asynchronously (based on some query) via an API.
For the first case we suggest loading the data directly from the `data` folder. This also ensures that the data is always available and that the visualization will not break even if the API is down.
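Until dedicated infrastructure exists, one possible approach is sketched below: fetch a CSV that the build makes available to the site and parse it by hand. The URL and column names are hypothetical, and the parser only handles simple CSVs without quoted commas.

```typescript
// Minimal sketch, not project infrastructure. Handles only simple CSVs
// (no quoted fields containing commas).
type Row = Record<string, string>;

function parseCsv(text: string): Row[] {
  const [header, ...lines] = text.trim().split('\n');
  const columns = header.split(',');
  return lines.map(line => {
    const values = line.split(',');
    return Object.fromEntries(columns.map((c, i) => [c, values[i]])) as Row;
  });
}

// Hypothetical usage inside a visualization (path is an assumption):
// const text = await (await fetch('/data/emissions/emissions.csv')).text();
// const rows = parseCsv(text);
```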
TODO: provide the infrastructure and examples of how to load data from the `data` folder.
For the second case we provide a Datasette API that serves all the data in this repository.
TODO: provide the infrastructure and examples of how to load data from the Datasette API.
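In the meantime, a standard Datasette instance (such as the one started above on port 8001) exposes every table as JSON, and the `_shape=array` parameter returns rows as plain objects. The sketch below builds such a URL; the database and table names are hypothetical.

```typescript
// Sketch only: builds a URL for Datasette's JSON API. `_shape=array` is a
// standard Datasette parameter that returns rows as an array of objects.
// Database and table names used here are hypothetical.
function datasetteUrl(
  base: string,
  database: string,
  table: string,
  params: Record<string, string> = {},
): string {
  const query = new URLSearchParams({ _shape: 'array', ...params });
  return `${base}/${database}/${table}.json?${query}`;
}

// Hypothetical usage inside a visualization:
// const rows = await (await fetch(
//   datasetteUrl('http://127.0.0.1:8001', 'emissions', 'co2')
// )).json();
```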