CLASS.METHOD

Below we compare Typed Repair with Gradient Descent under a shared, fully explicit evaluation protocol.

Note. For a more expository account of the motivation and interpretation, see the discussion at the bottom of the page.

Both are implemented in JavaScript and both are judged using the same locked prediction rule $\arg\max\nolimits^{*}$, but the aim is analytical rather than competitive. We strip away what is incidental to semantics (stochasticity, ambiguous preprocessing, shifting criteria) and retain only a small, replayable object: a finite interpretation with a deterministic readout.

With the extensional interface fixed, we vary only the intensional update rule and run two traces over the same scan schedule:

(i) We fix the witness table, the feature schema (bias, unary facts), the number of classes, and a deterministic scan order. Each cycle therefore has an unambiguous definition and can be replayed exactly.

(ii) Typed Repair performs refutation-guided repairs on a sparse integer weight table $\mathsf{W}$ and propagates consequences through postings into a cached score matrix $\mathsf{S}$. Gradient Descent performs SGD updates on softmax cross-entropy with floating-point weights, while prediction is still read out by the same locked $\arg\max\nolimits^{*}$.

(iii) We report only what the traces themselves make checkable. The Typed Repair trace exposes additional decidable invariants $$\mathsf{WF},\mathsf{CACHE.ok},\mathsf{FULL.ok}$$ and produces concrete witnesses when they fail. The GD trace reports loss and accuracy computed under the same locked readout.
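To make the witness-producing character of these checks concrete, here is a minimal JavaScript sketch of a CACHE.ok-style check. All names and data layouts are illustrative assumptions, not the artifact's actual API: the point is only that the check is total over the finite table and, on failure, returns the exact cell where the cached score disagrees with a recomputation.

```js
// Hypothetical sketch: CACHE.ok as a decidable check with witness extraction.
// Assumed layout (illustrative): W[f][c] = integer weight of feature f for class c,
// S[i][c] = cached score of example i for class c, feats(i) = finite feature list
// of example i under the locked schema.
function checkCacheOk(W, S, feats, N, K) {
  for (let i = 0; i < N; i++) {
    for (let c = 0; c < K; c++) {
      // Recompute the score from scratch under the locked semantics.
      let s = 0;
      for (const f of feats(i)) s += (W[f] && W[f][c]) || 0;
      if (s !== S[i][c]) {
        // Concrete witness of the violation: where, and both values.
        return { ok: false, witness: { i, c, cached: S[i][c], recomputed: s } };
      }
    }
  }
  return { ok: true };
}
```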

Result. A side-by-side view of state evolution under two update rules: which properties are maintained, how violations are witnessed, and what can be verified directly from the explicit finite execution.

We start with linearly separable datasets, e.g. $\mathrm{NAND/OR/AND}$, where a finite-table margin certificate is typically reachable. Alternatively, we can examine “harder” cases, some of which are not separable in this setting, e.g. $\textrm{XOR}$ or $\textrm{Parity-of-3}$.

Key Materials:

  • Downloadable EMNIST Jupyter Notebook; GitHub
  • Draft Paper; .pdf
Relevant Materials:
  • The Fractal Logic of 'Phi'-adic Recursion; arXiv
Secondary Literature:
  • Kleene: Introduction to Metamathematics; DOI
  • Troelstra, van Dalen: Constructivism in Mathematics: An Introduction; Publisher
  • On Stochastic Gradient Descent; Wikipedia
  • On Type Systems; Wikipedia

Step 1. We select a preset:

Display (editable):

Note. Inputs are separated by spaces, then “|”, then the class label. Example: 0 1|1.
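As an illustration of this format, the sketch below parses such lines into a finite witness table. The preset shown is a NAND truth table, and the parser is a hypothetical helper written for this note, not necessarily the page's own loader.

```js
// Hypothetical sketch: parse the editable display into a finite witness table.
// Each line: space-separated binary inputs, then "|", then the class label.
const presetNAND = `
0 0|1
0 1|1
1 0|1
1 1|0
`.trim();

function parseTable(text) {
  return text.split("\n").map((line) => {
    const [lhs, rhs] = line.split("|");
    return {
      x: lhs.trim().split(/\s+/).map(Number), // unary input facts
      y: Number(rhs.trim()),                  // class label
    };
  });
}

// parseTable(presetNAND) -> [{ x: [0, 0], y: 1 }, { x: [0, 1], y: 1 }, ...]
```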


Step 2. We choose the run budget and update rule parameters.

Shared budget (cycles $C$):

Seed:

Scan order:

For Typed Repair: Margin $\gamma$, Overshoot $\Delta$.

For Gradient Descent: Learning rate $\eta$.
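A hedged sketch of how these shared parameters might be collected into a single configuration object; the field names and default values are assumptions made for illustration, not the page's actual settings.

```js
// Hypothetical run configuration (illustrative names and values).
const runConfig = {
  cycles: 200,        // shared budget C
  seed: 42,           // seed for any stochastic choices (replay stays deterministic)
  scanOrder: "fixed", // deterministic scan schedule over the witness table
  typedRepair: { gamma: 1, delta: 1 }, // integer margin γ and overshoot Δ
  gradientDescent: { eta: 0.1 },       // learning rate η
};
```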


Step 3. We generate the two traces.

Build status:
Not built.

The “Semantics Lock” is shared and records what is fixed: $\textsf{K}$ classes, $\textsf{N}$ examples, $\textsf{F}$ features, the prediction rule $\arg\max\nolimits^{*}$, and the scan schedule. We hold the extensional interface fixed and vary only the intensional update rule.
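A minimal sketch of such a lock, assuming a lowest-index tie-break for $\arg\max\nolimits^{*}$ (the artifact's actual tie-breaking rule may differ) and illustrative field names and sizes:

```js
// Hypothetical sketch of the shared Semantics Lock (field names illustrative).
// Everything here is held fixed; only the update rule varies between traces.
function argmaxStar(scores) {
  // Locked readout: highest score wins; ties broken by the lowest class index.
  let best = 0;
  for (let c = 1; c < scores.length; c++) {
    if (scores[c] > scores[best]) best = c;
  }
  return best;
}

const semanticsLock = {
  K: 2,                 // number of classes
  N: 4,                 // number of examples in the witness table
  F: 3,                 // features: bias + unary facts
  predict: argmaxStar,  // the locked, tie-broken readout
  schedule: (t, n) => t % n, // deterministic scan order over the table
};
```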

Semantics Lock:


Step 4. We analyze the results. Both traces run the same scan schedule through the finite table for the requested cycle budget $C$. A “cycle” means:

  • for Typed Repair: one witness-driven repair update that changes $\textsf{W}$ and propagates cached $\textsf{S}$ via postings.
  • for Gradient Descent: one SGD update on a single example (one gradient step).
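For the Gradient Descent side, one cycle could look like the following sketch: a single softmax cross-entropy SGD step on one example, with prediction still read out by the locked $\arg\max\nolimits^{*}$. The weight layout and helper name are assumptions for illustration.

```js
// Hypothetical sketch of one Gradient Descent cycle: one SGD step on softmax
// cross-entropy for a single example. Assumed layout: W[c][f] are float weights,
// x is the feature vector (bias + unary facts), y is the true class index.
function sgdStep(W, x, y, eta, K) {
  const scores = W.map((wc) => wc.reduce((s, w, f) => s + w * x[f], 0));
  const m = Math.max(...scores);
  const exps = scores.map((s) => Math.exp(s - m)); // numerically stable softmax
  const Z = exps.reduce((a, b) => a + b, 0);
  for (let c = 0; c < K; c++) {
    const p = exps[c] / Z;
    const grad = p - (c === y ? 1 : 0); // d(cross-entropy)/d(score_c)
    for (let f = 0; f < x.length; f++) W[c][f] -= eta * grad * x[f];
  }
  return W; // prediction elsewhere uses the same locked argmax* readout
}
```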

Typed Repair (integer, refutation-guided, cached)

State:

WF: ? FULL.ok: ? CACHE.ok: ?

Cycle selection:

Certificate:

Last event:

Event log:

Interpretation and logic:

Gradient Descent (softmax cross-entropy, SGD)

State:

acc: ? loss: ?

Cycle selection:

Snapshot:

Last event:

Event log:

Interpretation:

On Typed Repair.

We present Typed Repair as a λ-definable learning pipeline that is fully effective in Kleene’s sense while remaining typed end-to-end. Training, compilation, and auditing are implemented as explicit recursive procedures whose success criteria are decidable and certified over a semantics-locked finite artifact.

The framework requires neither smoothness nor differentiability. It operates exclusively with discrete witnesses and integer arithmetic, yet exhibits the familiar error-driven behavior associated with gradient-based methods on simple finite tasks. The objective is not to replace SGD, but to identify which learning phenomena persist once analytic assumptions are removed. Even in a setting where the update rule is fully explicit and the entire training trace is replayable, classical limitative results (e.g., Rice’s Theorem) remain operative: stronger interpretability claims are obstructed in principle, not merely by engineering constraints.


Ansatz. We fix a Semantics Lock (deterministic preprocessing, feature extraction, and a stable tie-broken argmax*) and work over an explicit finite witness table. Training proceeds as a refutation-guided repair loop over a sparse integer weight table, paired with a cached score matrix. The loop preserves decidable invariants (well-formedness and cache correctness) and terminates at a decidable certificate gate: “all margin obligations are satisfied on the table.” Operationally, the procedure functions as a discrete analogue of backpropagation—systematic error-triggered updates with propagation through typed dependencies—while remaining fully auditable and replayable.

Concretely, learning is primitive recursion over a finitary state space. At each step, the algorithm selects the earliest witnessed margin failure and applies a local integer repair, propagating updates through postings to maintain cache correctness by construction:

$$ \begin{array}{l} \Sigma_0 \;:=\; (W_0,S_0) \;=\; (0,0),\\[0.6ex] \Sigma_{t+1} \;:=\; \mathsf{Repair}_{\gamma,\Delta}(\Sigma_t),\\[0.8ex] \text{step repairs the first witnessed margin violation } (i^*,c^*),\\[0.4ex] i^* \;=\; \min\{\, i < N \mid \exists c\neq y_i:\ S(i,y_i) < S(i,c)+\gamma \,\},\\[0.4ex] c^* \;=\; \arg\max\nolimits^{\!*}_{c\neq y_{i^*}}\, S(i^*,c). \end{array} $$
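Read operationally, one $\mathsf{Repair}_{\gamma,\Delta}$ step might look like the following sketch. It is illustrative only: the artifact's exact integer increment and posting representation may differ; here the repair simply bumps the true class up and the refuting class down by the overshoot $\Delta$ per active feature, and propagates those changes so that cache correctness holds by construction.

```js
// Hypothetical sketch of one Typed Repair cycle: find the earliest witnessed
// margin violation (i*, c*) and apply a local integer repair.
// Assumes W[f][c] and S[i][c] are initialised to 0 (dense here; the artifact's
// weight table is sparse), and feats(i) returns example i's feature ids.
function repairStep(W, S, feats, labels, N, K, gamma, delta) {
  for (let i = 0; i < N; i++) {                 // earliest witnessed failure i*
    const y = labels[i];
    let cStar = -1;
    for (let c = 0; c < K; c++) {
      if (c === y) continue;
      if (S[i][y] < S[i][c] + gamma &&
          (cStar === -1 || S[i][c] > S[i][cStar])) cStar = c; // tie-broken c*
    }
    if (cStar === -1) continue;                 // no violation at this i
    for (const f of feats(i)) {
      W[f][y] += delta;                         // raise the true class
      W[f][cStar] -= delta;                     // lower the refuting class
      for (let j = 0; j < N; j++) {
        if (feats(j).includes(f)) {             // posting list of feature f
          S[j][y] += delta;                     // keep the cache consistent
          S[j][cStar] -= delta;
        }
      }
    }
    return { repaired: true, witness: { i, c: cStar } };
  }
  return { repaired: false };                   // the certificate gate may fire
}
```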

Over the finite artifact, success is a decidable property: the certificate gate verifies (i) cache correctness and (ii) satisfaction of all margin obligations under the locked semantics. However, when the learned evaluator is considered extensionally—as an arbitrary effective procedure—stronger interpretability demands (universal invariances, minimal faithful summaries, global equivalence across implementations, or optimality claims) become semantic properties of programs and therefore inherit undecidability and related speedup barriers. Typed Repair does not circumvent these limits; it makes them explicit by separating what can be certified from what cannot.
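The margin half of that gate is a plain finite conjunction. A hedged sketch (the cache half would be discharged separately, e.g. by the CACHE.ok check sketched earlier; all names are illustrative):

```js
// Hypothetical sketch of the margin portion of the certificate gate:
// every obligation S(i, y_i) >= S(i, c) + gamma for all c != y_i must hold.
function marginCertificate(S, labels, N, K, gamma) {
  for (let i = 0; i < N; i++) {
    for (let c = 0; c < K; c++) {
      if (c === labels[i]) continue;
      if (S[i][labels[i]] < S[i][c] + gamma) {
        // Concrete refutation witness: obligation (i, c) fails on the table.
        return { certified: false, witness: { i, c } };
      }
    }
  }
  return { certified: true };
}
```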


Analogy. Typed Repair can be viewed as a formal analogue of concept acquisition. The learner begins with a maximally general hypothesis (“everything is red”). A counterexample refutes it (“this frog is not red”), prompting a small discriminating explanation (“firetrucks are red; frogs are green”). The learner then repairs its hypothesis within its current vocabulary (“everything is red or green”) and returns to a total claim. This iterative replacement of false universals by less false universals continues until no counterexamples remain relative to the learner’s finite experience. In simple environments the process converges quickly; in richer settings it remains only conditionally correct relative to an explicitly bounded domain.


Summary. The method exploits the asymmetry that violations are cheaply witnessed (existential, RE-style events), whereas correctness is universal (co-RE in the unbounded case). Accordingly, we maintain a refutation-driven update loop and discharge success only relative to an explicit, finite, auditable certificate gate. In this sense the training procedure is “diagonal” operationally: it progresses by systematically refuting a finite universal condition.

Note. I am looking for collaborators: people who will run the artifact, reproduce traces, and either extend the system or break its stated checks. Vision. Typed Repair is well-suited to regimes where the system is repeatedly updated in response to new evidence.

Aims:
  • Constructive scrutiny of explicit finite execution and effectivity,
  • optimization,
  • organization.

Good first contributions:

  • Replicate: run a preset with fixed parameters/seed and report any trace discrepancy (events, violation counts, certificate state).
  • Break: produce a minimal dataset/setting where a stated check fails without an explicit witness, or where determinism/replayability is violated.
  • Extend: add an alternative update rule or an additional decidable invariant + witness extractor, evaluated under the same fixed readout.

To keep discussions unambiguous, the shared interface is fixed: a finite table, a fixed feature schema, and a stable readout. Only the update rule and additional checks are intended to vary.

Legend
1 Navigation
2 Tutorial
3 Notes
4 Comparison via JavaScript
5 Exposition
5 Call to action