Semantic Reasoning Labs
We build models that learn and reason like humans.
Imagine a model where:
- The knowledge base is a set of inspectable, human-readable concepts.
- Each reasoning step is, mathematically, a mapping from the knowledge base back to the knowledge base, so steps can be chained indefinitely without ever leaving the internal representation.
- Reasoning steps are therefore just as explainable and scrutable as the knowledge itself.
- The knowledge base can be updated at inference time, in O(1) time per update.
- The entire knowledge base can serve as the task context, so context size is limited only by physical memory.
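To make the closure property above concrete, here is a minimal sketch (all names and the toy step are invented for illustration, not our actual implementation): because a reasoning step's output type equals its input type, steps compose freely, and adding a concept is a single constant-time insertion.

```python
# Hypothetical sketch of the properties listed above: a reasoning step maps a
# knowledge base to a knowledge base, so steps chain indefinitely, and an
# update is one O(1) dictionary insertion. All names here are invented.
from typing import Callable

KnowledgeBase = dict[str, str]  # concept name -> human-readable definition
Step = Callable[[KnowledgeBase], KnowledgeBase]

def apply_steps(kb: KnowledgeBase, steps: list[Step]) -> KnowledgeBase:
    for step in steps:          # chainable: output type == input type
        kb = step(kb)
    return kb

def derive_square(kb: KnowledgeBase) -> KnowledgeBase:
    # Toy step: derive a new concept from existing, inspectable ones.
    new = dict(kb)
    new["square"] = f"multiply a number by itself (uses: {kb['multiply']})"
    return new

kb = {"multiply": "repeated addition"}
kb["add"] = "combine two quantities"   # O(1) update at inference time
kb = apply_steps(kb, [derive_square])
print(sorted(kb))                      # ['add', 'multiply', 'square']
```

Every intermediate state is itself a readable knowledge base, which is what makes each step scrutable.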
Our technology cannot write a haiku in the style of Shakespeare, though. It is not meant to supplant LLMs. It complements them brilliantly in areas where they are fundamentally weak.
FAQ
Consider a student learning calculus. The new data she needs is modest: a textbook, lecture slides, and at most a few hundred worked examples (a few tens of MB in total). As for the compute required, the human brain runs on roughly 20 W. So even if the student spent every waking and sleeping hour of an entire semester learning nothing but calculus (unlikely), her brain's total energy budget would be roughly what it takes to power a single Nvidia B200 for two days.
So it is clearly physically possible to learn from far less data and with far less compute than current GPT-based technologies require. The method to do so just hasn't been discovered yet ... Oh, psst. :-)
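The energy comparison above can be checked with back-of-envelope arithmetic. Two figures are assumptions on our part, not from the argument itself: a semester of roughly 120 days, and a B200 board power of roughly 1000 W.

```python
# Back-of-envelope check of the brain-vs-GPU energy comparison.
# Assumed figures: semester ~ 120 days, B200 board power ~ 1000 W.
BRAIN_POWER_W = 20
SEMESTER_DAYS = 120
SECONDS_PER_DAY = 86_400
J_PER_KWH = 3.6e6

# Brain, running day and night for a whole semester:
brain_energy_kwh = BRAIN_POWER_W * SEMESTER_DAYS * SECONDS_PER_DAY / J_PER_KWH
# 20 W * 120 days = 57.6 kWh

# One GPU for two days:
GPU_POWER_W = 1000
gpu_energy_kwh = GPU_POWER_W * 2 * SECONDS_PER_DAY / J_PER_KWH
# 1000 W * 2 days = 48.0 kWh

print(brain_energy_kwh, gpu_energy_kwh)  # same order of magnitude
```

Under those assumptions the two budgets (57.6 kWh vs 48 kWh) are indeed roughly the same.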
Much of modern machine learning resembles dog training: repeated exposure, repeated reward, repeated reinforcement (plus backprop) until the desired behavior emerges. That works, and it is also an essential part of human learning. But it is not how humans typically acquire structured, high-level knowledge like mathematics.
Humans read. Humans think. Sometimes new concepts click immediately; sometimes only after revisiting the material or finding a different explanation that better matches our existing mental model.
At Semantic Reasoning Labs, this is the sort of learning we are trying to recreate computationally.
Current LLMs take a conceptually different path. Their fundamental limitations cannot be broken by hiring thousands of engineers to tweak them. We need a fundamentally different approach. We need to start with a blank sheet of paper.
That's where we come in. Oddballs and misfits who do not go with the flow. Who delved into AI looong before it was cool...
Extraordinary claims require extraordinary evidence
Will a demo showing deep reasoning running on an offline laptop count as extraordinary?