Graphon is a Python graph execution engine for agentic AI workflows.
The repository is still evolving, but it already contains a working execution engine, built-in workflow nodes, model runtime abstractions, integration protocols, and a runnable end-to-end example.
- Queue-based `GraphEngine` orchestration with event-driven execution
- Graph parsing, validation, and fluent graph building
- Shared runtime state, variable pool, and workflow execution domain models
- Built-in node implementations for common workflow patterns
- DSL import support with Slim-backed LLM nodes
- HTTP, file, tool, and human-input integration protocols
- Extensible engine layers and external command channels
Repository modules currently cover node types such as `start`, `end`, `answer`,
`llm`, `if-else`, `code`, `template-transform`, `question-classifier`,
`http-request`, `tool`, `variable-aggregator`, `variable-assigner`, `loop`,
`iteration`, `parameter-extractor`, `document-extractor`, `list-operator`, and
`human-input`.
Graphon is currently easiest to evaluate from a source checkout.
- Python 3.12 or 3.13
- `uv`
- `make`
Python 3.14 is currently unsupported because `unstructured`, which backs part
of the document extraction stack, declares `Requires-Python: <3.14`.
```bash
make dev
source .venv/bin/activate
make test
```

`make dev` installs the project, syncs development dependencies, and sets up
`prek` Git hooks. `make test` is the progressive local validation entrypoint:
it formats, applies lint fixes, runs `ty check`, and then runs pytest.
The repository includes minimal runnable Slim LLM examples at
`examples/slim_llm`.
Both versions execute this workflow:

```
start -> llm -> answer
```
To run it:
```bash
make dev
source .venv/bin/activate
cd examples/slim_llm
cp credentials.example.json credentials.json
python3 dsl.py "Reply with only the word Graphon."
python3 code.py "Reply with only the word Graphon."
```

Before running the example, fill in the required values in `credentials.json`.
The example currently expects:
- OpenAI-compatible model credentials in `model_credentials`
- `slim.mode` set to either `local` or `remote`
- `dify-plugin-daemon-slim` in `PATH`, `SLIM_BINARY_PATH`, or a local `slim` binary in the example directory
- for remote mode, `daemon_addr` and `daemon_key`
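For orientation only, the fields above suggest a credentials structure along
these lines. This is a hypothetical sketch: the nesting and any key names
beyond those listed are assumptions, not the actual schema.

```python
# Hypothetical shape of credentials.json -- nesting and the extra key
# names (api_key, base_url) are assumptions, not the real schema.
credentials = {
    "model_credentials": {       # OpenAI-compatible credentials
        "api_key": "<your key>",   # assumed field name
        "base_url": "<endpoint>",  # assumed field name
    },
    "slim": {
        "mode": "remote",          # "local" or "remote"
        "daemon_addr": "<addr>",   # remote mode only
        "daemon_key": "<key>",     # remote mode only
    },
}
```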
For the exact credential shape and runtime notes, see `examples/slim_llm/README.md`.
At a high level, direct Graphon usage looks like this:
- Build or load a graph and instantiate nodes into a `Graph`.
- Prepare `GraphRuntimeState` and seed the `VariablePool`.
- Configure model, file, HTTP, tool, or human-input adapters as needed.
- Run `GraphEngine` and consume emitted graph events.
- Read final outputs from runtime state.
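A skeleton of those steps, as a rough sketch rather than the exact API
(import paths follow the module layout below, but constructor arguments and
accessors are assumptions; `examples/slim_llm/code.py` is the working
reference):

```python
# Rough sketch of direct usage -- call signatures are assumptions,
# not the verbatim Graphon API; see examples/slim_llm/code.py.
from graphon.graph import Graph
from graphon.graph_engine import GraphEngine
from graphon.runtime import GraphRuntimeState, VariablePool

# 1. Build or load a graph and instantiate nodes into a Graph.
graph = Graph(...)  # e.g. via the fluent builder or a parsed config

# 2. Prepare runtime state and seed the variable pool.
variable_pool = VariablePool(...)
state = GraphRuntimeState(...)  # carries the variable pool

# 3. Configure model / file / HTTP / tool / human-input adapters here.

# 4. Run the engine and consume emitted graph events.
engine = GraphEngine(...)
for event in engine.run():
    print(type(event).__name__)  # node/graph lifecycle events

# 5. Read final outputs from runtime state (accessor name assumed).
print(state.outputs)
```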
For Dify DSL documents, use `graphon.dsl.loads()` to build the engine from the
workflow YAML and credentials. The resulting engine uses the DSL Slim adapter
for LLM nodes:
```python
from graphon.dsl import loads

engine = loads(
    dsl,
    credentials=credentials,
    workflow_id="example-dsl-openai-slim",
    start_inputs={"query": query},
)
events = list(engine.run())
```

See `examples/slim_llm/dsl.py` for the DSL import version and
`examples/slim_llm/code.py` for the Python graph construction version.
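Rather than materializing every event with `list(...)`, you can also stream
them as they are emitted; a minimal sketch, assuming the event classes
(defined in `src/graphon/graph_events`) can be distinguished by type name:

```python
# Stream graph events instead of collecting them all at once.
# Matching on the class name is an assumption; the concrete event
# models live in src/graphon/graph_events.
for event in engine.run():
    name = type(event).__name__
    if "Failed" in name:
        print(f"node failure event: {event}")
```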
- `src/graphon/graph`: graph structures, parsing, validation, and builders
- `src/graphon/graph_engine`: orchestration, workers, command channels, and layers
- `src/graphon/runtime`: runtime state, read-only wrappers, and variable pool
- `src/graphon/nodes`: built-in workflow node implementations
- `src/graphon/model_runtime`: provider/model abstractions and shared model entities
- `src/graphon/dsl`: DSL import support, including Slim-backed runtime adapters
- `src/graphon/graph_events`: event models emitted during execution
- `src/graphon/http`: HTTP client abstractions and default implementation
- `src/graphon/file`: workflow file models and file runtime helpers
- `src/graphon/protocols`: public protocol re-exports for integrations
- `examples/`: runnable examples
- `tests/`: unit and integration-style coverage
- `CONTRIBUTING.md`: contributor workflow, CI, commit/PR rules
- `examples/slim_llm/README.md`: runnable Slim LLM example setup
- `src/graphon/model_runtime/README.md`: model runtime overview
- `src/graphon/graph_engine/layers/README.md`: engine layer extension points
- `src/graphon/graph_engine/command_channels/README.md`: local and distributed command channels
Contributor setup, tooling details, CLA notes, and commit/PR conventions live in `CONTRIBUTING.md`.
CI currently validates pull request titles, runs `make check` (including
`uv.lock` freshness validation), and runs `uv run pytest` on Python 3.12 and
3.13. Python 3.14 is excluded because `unstructured` does not yet support it.
Apache-2.0. See LICENSE.