CLASS.METHOD
Summary

TYPED REPAIR

We present a perceptron that is fully lambda-definable and whose outputs admit independent certification.

Key materials:

  • Downloadable EMNIST Jupyter Notebook (GitHub)
  • Draft Paper (PDF)

Secondary literature and prerequisites:

  • Kleene: Introduction to Metamathematics (DOI)
  • Troelstra, van Dalen: Constructivism in Mathematics: An Introduction (DOI)
  • On Stochastic Gradient Descent (Wikipedia)
  • On Type Systems (Wikipedia)

On Typed Repair.

The central contribution is a verification-shaped interface for learned evaluators, not a rhetorical claim about “interpretability.” The semantics lock $\mathcal{L}$ removes hidden degrees of freedom, exposes decidable certificate predicates over explicit finite artifacts, and introduces a specialization gate that states when a learned evaluator may replace a reference one.

We compare Typed Repair with Gradient Descent under a shared, fully explicit evaluation protocol.

Note. For a more expository account of the motivation and interpretation, see the discussion at the bottom of the page.

Both are implemented in JavaScript and both are judged using the same locked prediction rule $\arg\max\nolimits^{*}$, but the aim is analytical rather than competitive. We strip away what is incidental to semantics (stochasticity, ambiguous preprocessing, shifting criteria) and retain only a small, replayable object: a finite interpretation with a deterministic readout.
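The locked readout $\arg\max\nolimits^{*}$ must be a total, deterministic function of the score vector. A minimal JavaScript sketch; the tie-breaking convention (smallest class index wins) is our assumption and is not fixed by the text above:

```javascript
// Locked prediction rule: deterministic argmax over class scores.
// Assumption: ties break toward the smallest class index, so the
// readout is a total function with no hidden randomness.
function argmaxStar(scores) {
  let best = 0;
  for (let c = 1; c < scores.length; c++) {
    if (scores[c] > scores[best]) best = c; // strict ">" keeps the earliest index on ties
  }
  return best;
}
```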

With the extensional interface fixed, we vary only the intensional update rule and run two traces over the same scan schedule:

(i) We fix the witness table, the feature schema (bias, unary facts), the number of classes, and a deterministic scan order. Each cycle therefore has an unambiguous definition and can be replayed exactly.

(ii) Typed Repair performs refutation-guided repairs on a sparse integer table $W$ and propagates consequences through postings into a cached score matrix $S$. Gradient Descent performs SGD updates on softmax cross-entropy with floating-point weights, while prediction is still read out by the same locked $\arg\max\nolimits^{*}$.

(iii) We report only what the traces themselves make checkable. The Typed Repair trace exposes additional decidable invariants, namely

$$\mathsf{WF},\quad \mathsf{CACHE.ok},\quad \mathsf{FULL.ok},$$

and produces concrete witnesses when they fail. The GD trace reports loss and accuracy computed under the same locked readout.

Result. A side-by-side view of state evolution under two update rules: which properties are maintained, how violations are witnessed, and what can be verified directly from the explicit finite execution.

We start with linearly separable datasets, e.g. $\mathrm{NAND/OR/AND}$ where a finite-table margin certificate is typically reachable. Alternatively, we can examine “harder” cases, some of which are not separable in this setting, e.g. $\textrm{XOR}$ or $\textrm{Parity-of-3}$.
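The presets above can be written down explicitly. A sketch, assuming the display format described in the demo ("inputs, then |, then class label") and the feature schema stated later (bias plus unary facts); the object and function names are illustrative, not the demo's actual identifiers:

```javascript
// Hypothetical preset tables in the demo's display format.
const PRESETS = {
  NAND: ["0 0 | 1", "0 1 | 1", "1 0 | 1", "1 1 | 0"], // linearly separable
  XOR:  ["0 0 | 0", "0 1 | 1", "1 0 | 1", "1 1 | 0"], // not separable in this feature schema
};

// Feature schema from the text: a bias feature plus the unary facts.
function features(xs) {
  return [1, ...xs];
}
```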

Demo

Our realization builds two traces under a shared semantics lock and dataset, enabling cycle-by-cycle comparison under one readout rule.


Step 1. Choose a preset:

Display (editable):

Note. Inputs are separated by spaces, then “|”, then class label. Example: 0 1 | 1.
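Parsing one display line is straightforward. A hypothetical helper (the demo's actual parser is not shown in the text):

```javascript
// Parse one display line such as "0 1 | 1" into { xs, y }.
function parseExample(line) {
  const [lhs, rhs] = line.split("|");
  const xs = lhs.trim().split(/\s+/).map(Number); // space-separated inputs
  const y = Number(rhs.trim());                   // class label
  return { xs, y };
}
```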


Step 2. Optionally choose the run budget and update-rule parameters.

Shared budget (cycles $C$):

Seed:

Scan order:

For Typed Repair, Margin $\gamma$: , Overshoot $\Delta$: .

For Gradient Descent, Learning rate $\eta$: .


Step 3. Generate the two traces.

Build status:
Not built.

The “Semantics Lock” $\mathcal{L}$ is shared and records what is fixed: $K$ classes, $\mathrm{N}$ examples, $F$ features, prediction, $\arg\max\nolimits^{*}$, and the scan schedule. We hold the extensional interface fixed and vary only the intensional update rule.
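As a sketch, the lock can be represented as a frozen record; the field names here are our assumptions, chosen to mirror the quantities the text says are fixed:

```javascript
// Sketch of a semantics-lock record. Everything the readout depends on
// is fixed and frozen here, so a trace can be replayed exactly.
function makeLock(K, N, F, scanOrder) {
  return Object.freeze({
    K, N, F,                              // classes, examples, features
    scan: Object.freeze([...scanOrder]),  // deterministic scan schedule
    readout: "argmax*",                   // locked tie-broken prediction rule
  });
}
```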

Semantics Lock:


Step 4. Analyze the results. Both traces run a scan schedule through the finite table for the requested number of epochs. A “cycle” means one repair or one stochastic update.

(a) Typed Repair (integer, refutation-guided, cached)

State:

WF: ? FULL.ok: ? CACHE.ok: ?
Cycle selection:

Snapshot:

Waiting for build.

Event log:

Waiting for build.

Interpretation:


(b) Gradient Descent (softmax cross-entropy, SGD)
State:
acc: ? loss: ?
Cycle selection:

Snapshot:

Event log:

Waiting for build.

Interpretation:

Waiting for build.

Ansatz. We fix deterministic preprocessing, feature extraction, and a stable tie-broken $\arg\max^{*}$ over an explicit finite witness table. Training then proceeds as a refutation-guided repair loop over a sparse integer weight table and a cached score matrix, preserving decidable invariants and terminating at a checkable certificate gate (“all margin obligations are satisfied on the table”).

Aims:

  • Claim. Training should produce a finite execution that is replayable and externally checkable.
  • Method. Keep the extensional interface fixed, vary only the update rule, and expose witness-bearing failure modes.
  • Gate. Accept a learned evaluator only after finite validation under the lock $\mathcal{L}$.

This is the point at which the usual “degrees of freedom” are intentionally removed: there is no ambiguity about preprocessing, tie-breaking, stochasticity, or evaluation criteria. The resulting object is a finite, replayable execution whose states can be re-checked without trusting the trainer.

Training manipulates an explicit, finitary state

$$ \Sigma \;\equiv\; \{\,W, S, \ldots\,\}, $$

where $W$ is a sparse integer weight table and $S$ is a cached score matrix (maintained via postings). Each snapshot exposes decidable predicates with witness extractors on failure:

$$ \begin{array}{l|l} \mathsf{WF}(\Sigma) & \text{Well-formedness of the finite representation}\\[0.4ex] \mathsf{CACHE.ok}(\Sigma) & \text{Cache correctness}\\[0.4ex] \mathsf{FULL.ok}(\Sigma) & \text{Zero-violation / margin closure by } \arg\max^{*}\\ \end{array} $$

When a predicate fails, the system returns a concrete witness (an explicit index and class exhibiting a cache discrepancy, or the explicit earliest margin counterexample). When $\mathsf{FULL.ok}$ holds, success is not asserted as a global generalization claim; it is a verified statement about the finite artifact under $\mathcal{L}$.
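The cache predicate, for instance, is decidable by recomputation. A sketch, assuming $S$ is cached as a dense matrix and $\Phi$ holds the feature rows; names are illustrative:

```javascript
// Decidable cache check: recompute each score from the integer table W
// and the feature rows Phi, compare against the cached matrix S, and
// return the first discrepancy as a concrete witness.
function cacheOk(W, S, Phi) {
  for (let i = 0; i < Phi.length; i++) {
    for (let c = 0; c < W.length; c++) {
      const s = Phi[i].reduce((acc, x, f) => acc + x * W[c][f], 0);
      if (s !== S[i][c]) return { ok: false, witness: { i, c } }; // explicit index and class
    }
  }
  return { ok: true };
}
```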


The learner proceeds by refutation-guided updates. At each step it selects the earliest witnessed margin violation and performs a local integer repair, propagating consequences through postings to maintain cache correctness by construction:

$$ \begin{array}{l} \Sigma_0 := (W_0,S_0) = (0,0),\\[0.6ex] \Sigma_{t+1} := \mathsf{Repair}_{\gamma,\Delta}(\Sigma_t),\\[0.8ex] i^* = \min\{\, i < N \mid \exists c\neq y_i:\ S(i,y_i) < S(i,c)+\gamma \,\},\\[0.4ex] c^* = \arg\max\nolimits^{\!*}_{c\neq y_{i^*}}\, S(i^*,c). \end{array} $$
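One repair cycle can be sketched as follows. The selection rule ($i^{*}$ as the earliest violation, $c^{*}$ by tie-broken argmax) follows the display above; the exact integer update and its posting discipline are our assumptions, since the text fixes only their invariants:

```javascript
// One refutation-guided repair step over state { W, S }.
// Phi: feature rows, y: labels, gamma: margin, delta: overshoot.
function repairStep(state, Phi, y, gamma, delta) {
  const { W, S } = state;
  for (let i = 0; i < Phi.length; i++) {         // earliest violation i*
    let cStar = -1;
    for (let c = 0; c < W.length; c++) {
      if (c === y[i]) continue;
      if (S[i][y[i]] < S[i][c] + gamma &&
          (cStar === -1 || S[i][c] > S[i][cStar])) cStar = c; // argmax*, earliest on ties
    }
    if (cStar === -1) continue;
    // Local integer repair on active features; consequences are posted
    // into every cached row of S so CACHE.ok holds by construction.
    for (let f = 0; f < Phi[i].length; f++) {
      if (Phi[i][f] === 0) continue;
      W[y[i]][f] += delta;
      W[cStar][f] -= delta;
      for (let j = 0; j < Phi.length; j++) {
        S[j][y[i]] += delta * Phi[j][f];
        S[j][cStar] -= delta * Phi[j][f];
      }
    }
    return { repaired: i, against: cStar };
  }
  return null; // no witnessed violation: FULL.ok holds on the table
}
```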

Operationally this resembles error-driven methods (perceptron-like in spirit), but the intended novelty is the surrounding constructive discipline: the semantics lock, the explicit finitary state, the decidable certificates, and the witness-bearing failure modes. In this sense the update step is not the contribution; the isolated $\lambda$-definable abstraction of learning is.

Let $\textsf{E}_{\mathsf{ref}}$ be a reference evaluator induced by the lock $\mathcal{L}$ (e.g. recomputation from $\Phi$ and a declared scoring rule), and let $\textsf{E}_{\mathsf{learn}}$ be the learned evaluator encoded by a terminal snapshot $\Sigma_T$. We permit replacing $\textsf{E}_{\mathsf{ref}}$ by $\textsf{E}_{\mathsf{learn}}$ only after a finite validation pass discharges a checkable obligation such as

$$ \forall i < \mathrm{N} : \textsf{E}_{\mathsf{learn}}(x_i) = \textsf{E}_{\mathsf{ref}}(x_i) $$

When this gate fails, it fails with a concrete counterexample $i$.
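The gate itself is a finite loop. A sketch, assuming both evaluators are exposed as functions over table rows (names hypothetical):

```javascript
// Specialization gate: accept the learned evaluator only if it agrees
// with the reference evaluator on every table row; otherwise return
// the first counterexample index as a concrete witness.
function gate(evalLearn, evalRef, table) {
  for (let i = 0; i < table.length; i++) {
    if (evalLearn(table[i]) !== evalRef(table[i])) {
      return { accept: false, counterexample: i };
    }
  }
  return { accept: true };
}
```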


Limitation. Seen extensionally, the model is still “just a perceptron.” What changes is not what can be represented, but what is known about training: updates are deterministic, refutation-driven, and auditable. The project does not dissolve the black box; it argues that opacity is intrinsic once learning amounts to selecting a program in an expressive language, where intensional structure is underdetermined by behavior. In that sense, the project explains the persistence of black boxes rather than eliminating them: If a system already contains a hard core—underdetermination, intractable minimization, undecidable properties—then adding complexity typically does not remove that core.

Note. Collaborators who want to reproduce the traces and either extend the system or break its stated checks are invited. Vision. Typed Repair may be well suited to regimes where the system is repeatedly updated in response to new evidence. We aim for constructive scrutiny of explicit finite executions, effectivity, and optimization routines.

Legend
1. Navigation
2. Tutorial
3. Notes
4. Comparison via JavaScript
5. Exposition
6. Invitation