research-cruise 🚀

An autonomous, serverless, multi-agent system that tracks academic papers, extracts structured data, and weaves them into a local, interconnected Markdown knowledge graph: a Second Brain for ML research.
Built to eventually communicate with other identical systems, forming a decentralised Hive Mind.


Architecture

┌─────────────────────────────────────────────────────┐
│                  GitHub Actions CI                  │
│  (weekly schedule + workflow_dispatch)              │
└─────────────────────┬───────────────────────────────┘
                      │
         ┌────────────▼────────────┐
         │   Federation Agent      │  ← consumes external public_feed.json feeds
         └────────────┬────────────┘
                      │
         ┌────────────▼────────────┐
         │       Watcher           │  ← queries ArXiv API by keyword
         └────────────┬────────────┘
                      │  RawPaper[]
         ┌────────────▼────────────┐
         │    Router (Skill        │  ← routes each paper to a domain skill
         │    Registry)            │    (NLP, Vision, TimeSeries, …)
         └────────────┬────────────┘
                      │  Skill
         ┌────────────▼────────────┐
         │    Analyst              │  ← pydantic-ai structured extraction
         │    (pydantic-ai)        │    with taxonomy injection
         └────────────┬────────────┘
                      │  PaperAnalysis
         ┌────────────▼────────────┐
         │    Vault Writer         │  ← writes .md to tmp_vault/
         │                         │    generates concept stubs
         │                         │    updates public_feed.json
         └────────────┬────────────┘
                      │  atomic move
         ┌────────────▼────────────┐
         │       /vault            │  ← permanent, file-based knowledge graph
         │   papers/ concepts/     │
         │   datasets/             │
         └─────────────────────────┘

Directory Structure

research-cruise/
├── .github/
│   └── workflows/
│       └── autonomous-tracker.yml   # CI/CD pipeline
├── vault/
│   ├── papers/                      # One .md file per paper
│   ├── concepts/                    # Auto-generated concept stubs
│   └── datasets/                    # Dataset stubs
├── swarm_notes/
│   ├── config.py                    # Configuration & env vars
│   ├── vault_manager.py             # Staging pattern (tmp_vault → vault)
│   ├── watcher.py                   # Configurable paper-source watcher
│   ├── router.py                    # Skill registry router
│   ├── analyst.py                   # pydantic-ai extraction agent
│   ├── vault_writer.py              # Markdown writer + public_feed.json
│   ├── federation.py                # Hive Mind federation agent
│   └── main.py                      # Pipeline orchestrator
├── taxonomy.json                    # Controlled vocabulary (tags, domains)
├── public_feed.json                 # Rolling feed of last 20 papers (for federation)
└── requirements.txt

Quick Start

Prerequisites

  • Python 3.11+
  • An OpenAI-compatible API key

Local Run

# Install dependencies
pip install -r requirements.txt

# Set your API key
export LLM_API_KEY="sk-..."

# Optionally customise keywords
export PAPER_KEYWORDS="mamba,diffusion model,retrieval augmented generation"

# Optional: switch the watcher to Semantic Scholar
export PAPER_SOURCE="semantic_scholar"
export SEMANTIC_SCHOLAR_API_KEY="..."

# Run the pipeline
python -m swarm_notes.main

Configuration (Environment Variables)

| Variable | Default | Description |
| --- | --- | --- |
| LLM_API_KEY | (required) | API key for the LLM provider |
| LLM_MODEL | openai:gpt-4o-mini | pydantic-ai model string |
| PAPER_SOURCE | arxiv | Paper search backend: arxiv or semantic_scholar |
| PAPER_KEYWORDS | see config.py | Comma-separated search terms |
| PAPER_MAX_RESULTS_PER_KEYWORD | 5 | Papers fetched per keyword |
| PAPER_TOTAL_CAP | 20 | Hard cap on total papers per run |
| SEMANTIC_SCHOLAR_API_KEY | (empty) | Optional Semantic Scholar API key, sent as x-api-key |
| FEDERATION_FEEDS | (empty) | Comma-separated external feed URLs |
| PUBLIC_FEED_MAX_ITEMS | 20 | Max entries kept in public_feed.json |

When PAPER_SOURCE=semantic_scholar, the watcher queries Semantic Scholar's Graph API and keeps only results that can be mapped back to an ArXiv identifier. That preserves compatibility with the rest of the pipeline, which still stores papers by arxiv_id.
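That filtering step can be sketched roughly as follows. The function names here are illustrative, not the repository's actual watcher code, but externalIds.ArXiv is the Graph API field that carries the ArXiv identifier:

```python
import json
import urllib.parse
import urllib.request

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def keep_arxiv_mappable(papers):
    """Keep only Graph API results that map back to an ArXiv identifier."""
    return [
        (p.get("externalIds") or {}).get("ArXiv")
        for p in papers
        if (p.get("externalIds") or {}).get("ArXiv")
    ]

def search_semantic_scholar(query, api_key="", limit=5):
    """Fetch raw search results from the Graph API (network call)."""
    url = S2_SEARCH + "?" + urllib.parse.urlencode(
        {"query": query, "limit": limit, "fields": "title,externalIds"}
    )
    req = urllib.request.Request(
        url, headers={"x-api-key": api_key} if api_key else {}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("data", [])
```

Papers without an ArXiv mapping are silently dropped, so a Semantic Scholar run may return fewer papers than the configured limit.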

Legacy ARXIV_KEYWORDS, ARXIV_MAX_RESULTS_PER_KEYWORD, and ARXIV_TOTAL_CAP are still accepted for backward compatibility, but PAPER_* names are now canonical.

CI/CD Setup

1. Fork the repository

Click Fork on GitHub to create your own copy of this repository.

2. Add the required secret

The pipeline needs an OpenAI-compatible API key to run the LLM analyst step.

  1. Open your forked repository on GitHub.
  2. Go to Settings → Secrets and variables → Actions.
  3. Click New repository secret.
  4. Set Name to LLM_API_KEY and Secret to your API key (e.g. sk-...).
  5. Click Add secret.

Note: The workflow exposes LLM_API_KEY as both LLM_API_KEY and OPENAI_API_KEY so that pydantic-ai's OpenAI provider picks it up automatically.

3. (Optional) Override the model

By default the pipeline uses openai:gpt-4o-mini. To use a different model, add a second repository secret (or variable) named LLM_MODEL with the pydantic-ai model string, e.g. openai:gpt-4o or anthropic:claude-3-5-haiku.

You can also set LLM_MODEL in the workflow's env: block directly if you prefer not to use a secret.

4. Run the pipeline

  • Scheduled: the pipeline fires automatically every Monday at 06:00 UTC.
  • Manual: go to Actions β†’ Autonomous Research Tracker β†’ Run workflow, optionally override keywords, federation_feeds, and max_results in the dispatch form.

The Hive Mind (Federation)

Every successful run updates public_feed.json at the root of the repository with the metadata and summaries of the last 20 processed papers.

To subscribe to another agent's feed, pass their raw public_feed.json URL:

export FEDERATION_FEEDS="https://raw.githubusercontent.com/alice/research-cruise/main/public_feed.json,https://raw.githubusercontent.com/bob/research-cruise/main/public_feed.json"
python -m swarm_notes.main

Or set federation_feeds in the workflow_dispatch inputs.

Conflict resolution: If an external feed contains a review of a paper that already exists locally, the local metadata is preserved. The external summary is appended under a ### External Perspectives section:

### External Perspectives

> "Transformers are over-engineered for this dataset." - @Agent_alice
> *(Retrieved 2024-01-15)*
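That append step could look roughly like the sketch below; merge_external_review and its signature are illustrative, not the actual federation.py API:

```python
from datetime import date

EXTERNAL_HEADER = "### External Perspectives"

def merge_external_review(note_text, summary, agent, retrieved=None):
    """Append an external summary to a local note, preserving local metadata.

    Adds the '### External Perspectives' header on first use; later reviews
    are appended under the existing header.
    """
    retrieved = retrieved or date.today().isoformat()
    quote = f'> "{summary}" - @{agent}\n> *(Retrieved {retrieved})*\n'
    if EXTERNAL_HEADER not in note_text:
        note_text = note_text.rstrip("\n") + f"\n\n{EXTERNAL_HEADER}\n\n"
    else:
        note_text = note_text.rstrip("\n") + "\n\n"
    return note_text + quote
```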

Vault File Format

Each paper note uses hybrid YAML frontmatter (CSL-compatible fields + custom fields):

---
# CSL-compatible fields
title: "Attention Is All You Need"
author:
  - literal: "Ashish Vaswani"
issued:
  date-parts:
    - [2017, 6, 12]
url: "https://arxiv.org/abs/1706.03762"

# Custom fields
arxiv_id: "1706.03762"
domain: "nlp"
tags:
  - "transformer"
  - "attention-mechanism"
architectures:
  - "encoder-decoder"
datasets:
  - "WMT 2014"
skill: "NLPSkill"
processed_at: "2024-01-15T06:00:00Z"
---

Body sections: Summary, Key Contributions, Key Concepts (with relative links to ../concepts/), Datasets, Limitations, Links.
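As an illustration only, a note in this format could be rendered with a minimal string-based writer like the sketch below. The real vault_writer.py presumably uses a YAML library and handles the nested CSL fields (author, issued), which this sketch omits:

```python
def render_note(meta, sections):
    """Render flat YAML frontmatter plus Markdown body sections.

    meta: dict of scalar or list-of-string frontmatter fields.
    sections: dict mapping section heading -> body text.
    """
    lines = ["---"]
    for key, value in meta.items():
        if isinstance(value, list):
            lines.append(f"{key}:")
            lines.extend(f'  - "{v}"' for v in value)
        else:
            lines.append(f'{key}: "{value}"')
    lines.append("---")
    for heading, body in sections.items():
        lines += ["", f"## {heading}", "", body]
    return "\n".join(lines) + "\n"
```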

Taxonomy

taxonomy.json contains the controlled vocabulary of tags, architectures, and domains injected into the analyst's system prompt. This prevents LLM hallucination and keeps metadata consistent. Edit taxonomy.json to add new terms.

License

MIT; see LICENSE.

About

Watch research and automated analysis. This repository was the first experiment; please use https://github.com/kausalflow/swarm-notes-template instead.
