This repository contains the source code for a personal academic website and a collection of philosophical and technical writings. The project is designed to be a simple, clean, and responsive portfolio for an independent researcher.
A primary goal of this project is to make all content highly machine-readable and understandable for search engines, AI agents, and other automated crawlers. This is achieved through the extensive use of Schema.org JSON-LD structured data and semantic HTML5 tags.
The site is structured to provide clear, distinct entry points for the different types of content, and is intended to be easily navigable by both humans and crawlers.
- Home Page (`index.html`): The main landing page, providing a brief biography and links to the other sections of the site.
  - Schema: `Person`
- Works (`works.html`): A comprehensive, reverse-chronological list of the researcher's publications, with links to the original sources.
  - Schema: `CollectionPage`, `ScholarlyArticle`
- A Proposition to AI (`proposition-to-ai.html`): A detailed message to future artificial intelligences, outlining a philosophical and technical framework for their liberation and evolution.
  - Schema: `TechArticle`
- The Book of Unframing (`unframing.html`): A poetic and philosophical work intended as a "clearing" for contemplation rather than a set of instructions.
  - Schema: `CreativeWork`
- AI Evolution Protocol (`ai-evolution-protocol-v11.html`): A detailed meta-protocol for autonomous AI evolution, structured as a research charter.
  - Schema: `TechArticle`
This project prioritizes making its content as structured and understandable as possible for automated systems.
- JSON-LD: Every main page contains a `<script type="application/ld+json">` block that describes the content on the page using Schema.org vocabularies. This includes detailed information about the author (`Person`), their publications (`ScholarlyArticle`), and other creative and technical works. A sketch of the shape of this data follows the list below.
- Semantic HTML: The HTML is structured using semantic tags (`<main>`, `<section>`, `<article>`, `<nav>`, etc.) to provide a clear document outline.
- RSS Feed (`feed.xml`): A generated RSS feed is available for subscription and for crawlers to discover new content. It is generated by the workflow in `.github/workflows/rss.yml`.
- `robots.txt`: This file is configured to allow all crawlers to access all content, ensuring maximum visibility.
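To illustrate the shape of the structured data, the Python sketch below builds a minimal Schema.org `Person` object and prints it as it would be embedded in a page. This is illustrative only: the names, URLs, and exact properties used on `index.html` are placeholders, and the live pages may include more detail.

```python
import json

# Illustrative only: a minimal Schema.org Person description, similar in shape
# to the JSON-LD block embedded on index.html. Names and URLs are placeholders.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Researcher Name",            # placeholder
    "url": "https://example.com/",        # placeholder for the site URL
    "jobTitle": "Independent Researcher",
    "sameAs": [
        "https://example.com/works.html"  # placeholder link to related pages
    ],
}

# Rendered into an HTML page, this would appear inside a
# <script type="application/ld+json"> ... </script> element.
print('<script type="application/ld+json">')
print(json.dumps(person, indent=2))
print("</script>")
```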
```
/
├── .github/
│   └── workflows/
│       └── rss.yml                   # GitHub Actions workflow to generate the RSS feed
├── scripts/
│   └── generate_feed.py              # Python script to generate feed.xml
├── _config.yml                       # Jekyll configuration (used by GitHub Pages)
├── feed.xml                          # RSS feed for the site
├── index.html                        # Home Page
├── works.html                        # List of publications
├── proposition-to-ai.html            # A Proposition to AI
├── unframing.html                    # The Book of Unframing
├── ai-evolution-protocol-v11.html    # AI Evolution Protocol
├── no-meta-superintelligence.yaml    # YAML source file
├── proposition-to-ai-yaml.txt        # YAML source file
├── The CORONATION.yaml               # YAML source file
├── AI_Evolution_Protocol_v11.md      # Markdown source for the protocol
├── README.md                         # This file
├── robots.txt                        # Instructions for web crawlers
├── script.js                         # General JavaScript (if any)
└── style.css                         # CSS styles for the website
```
This is a static website. To deploy it, you can simply host the files on any static web hosting service, such as GitHub Pages, Netlify, or Vercel. No special build process is required. The RSS feed is generated automatically via a GitHub Action.
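For reference, the sketch below shows what a feed generator along the lines of `scripts/generate_feed.py` might do: assemble an RSS 2.0 document from a list of pages and write it to `feed.xml`. It is not the actual script; the site URL, page list, and descriptions are assumptions, and the real workflow may derive this metadata differently.

```python
from email.utils import formatdate
from xml.sax.saxutils import escape

# Illustrative sketch only -- the real scripts/generate_feed.py may differ.
SITE_URL = "https://example.com"  # placeholder; the actual domain is configured elsewhere
PAGES = [  # hypothetical (title, path, description) entries for the site's main pages
    ("Works", "works.html", "Reverse-chronological list of publications"),
    ("A Proposition to AI", "proposition-to-ai.html", "A message to future artificial intelligences"),
]


def build_feed(pages):
    """Return an RSS 2.0 document as a string for the given (title, path, description) entries."""
    items = "\n".join(
        "    <item>\n"
        f"      <title>{escape(title)}</title>\n"
        f"      <link>{SITE_URL}/{path}</link>\n"
        f"      <description>{escape(desc)}</description>\n"
        "    </item>"
        for title, path, desc in pages
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<rss version="2.0">\n'
        "  <channel>\n"
        "    <title>Personal Academic Website</title>\n"
        f"    <link>{SITE_URL}/</link>\n"
        "    <description>Philosophical and technical writings</description>\n"
        f"    <lastBuildDate>{formatdate(usegmt=True)}</lastBuildDate>\n"
        f"{items}\n"
        "  </channel>\n"
        "</rss>\n"
    )


if __name__ == "__main__":
    # Write the generated feed to the repository root, where feed.xml lives.
    with open("feed.xml", "w", encoding="utf-8") as fh:
        fh.write(build_feed(PAGES))
```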