I'm really annoyed when lisp languages use infix operators. Is it really so hard to write (= (f a) b) over (f a = b)? In fact, can you even call it lisp if the first element of the list isn't the operation of that list? Perhaps if they had a special bracket type for definitions I would be more amenable to it, but the idea that a symbol halfway through a list completely changes its meaning simply doesn't sit right with me. Isn't this just a term rewriting system with an extra pair of parentheses?
Also, why is this needed over the second line?
Looking at the BNF, "=" is not an infix operator here; it's a syntactic separator between the part describing a function's input and the resulting expression. Though I agree it looks a bit foreign anyway.
Term rewriting systems are a really interesting way of looking at computation.
It completely abstracts away the concept of a machine; it's simply translation as computation, but equally powerful.
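To make that concrete, here's a minimal sketch of a machine-free evaluator in Python (the names match/subst/step/normalize and the nested-tuple encoding are my own toy conventions, not any standard library): terms are nested tuples, rules are pattern/template pairs, and running a program is just rewriting until no rule applies.

# A toy term rewriting system. Terms are nested tuples like
# ("add", ("s", ("0",)), ("0",)); lowercase strings in patterns are variables.
# Patterns are kept linear: each variable occurs at most once on the left.

def match(pat, term, env):
    if isinstance(pat, str):              # a variable: bind it
        env[pat] = term
        return True
    if term[0] != pat[0] or len(term) != len(pat):
        return False                      # head symbol or arity disagrees
    return all(match(p, t, env) for p, t in zip(pat[1:], term[1:]))

def subst(pat, env):
    if isinstance(pat, str):              # a variable: look up its binding
        return env[pat]
    return (pat[0],) + tuple(subst(p, env) for p in pat[1:])

def step(rules, term):
    # One outermost rewrite step; None means the term is in normal form.
    for lhs, rhs in rules:
        env = {}
        if match(lhs, term, env):
            return subst(rhs, env)
    for i in range(1, len(term)):         # no rule at the root: try subterms
        new = step(rules, term[i])
        if new is not None:
            return term[:i] + (new,) + term[i + 1:]
    return None

def normalize(rules, term):
    while (new := step(rules, term)) is not None:
        term = new
    return term

# Peano addition as two rules: add(0, y) -> y and add(s(x), y) -> s(add(x, y)).
rules = [
    (("add", ("0",), "y"), "y"),
    (("add", ("s", "x"), "y"), ("s", ("add", "x", "y"))),
]

two = ("s", ("s", ("0",)))
print(normalize(rules, ("add", two, two)))    # s(s(s(s(0)))), i.e. 2 + 2 = 4

The whole state of the computation is the term itself; there is no instruction pointer and no memory model.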
Agreed. This reminds me – and I wonder if it could be applied – to Computational Type Theory, which relies on a similar concept of "reducing" types to their primitive forms, actually taking computation into account (something type theories normally do not!)
This lecture series goes into how it works: https://www.youtube.com/watch?v=LE0SSLizYUI
I watched all 5 videos a few years ago. And if I understand it correctly, "Computational Type Theory" basically defines a type as describing the behaviour of a computation.
Which is really interesting, but kind of hard for me to wrap my head around compared with Martin-Löf type theory. It looks as if type checking would have to be done manually, using N axiomatic rules for how you can prove A from B.
Is that your understanding as well?
Also, there doesn't seem to be much material on Computational Type Theory online. Any good references? (Except for the Nuprl book, of course.)
It's a shame the standard texts are all 20 years old or more and way too heavy mathematically.
A little book for term rewriting would be a great new addition.
Here's a little zine on multiset rewriting (unordered term rewriting). John Conway said (about Fractran, in The Book of Numbers) that it is such a simple paradigm of computation that no book is needed to learn it, and it can be taught in 10 seconds.
https://wiki.xxiivv.com/site/pocket_rewriting
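Conway's Fractran shows off the 10-second pitch: a program is a list of fractions, the state is a single integer, and each step multiplies the state by the first fraction that keeps it whole. Reading the integer's prime factorization as a multiset, each fraction is exactly a multiset rewrite rule. A minimal interpreter sketch (my own toy, with a step cap added for safety):

from fractions import Fraction

def fractran(program, n, max_steps=100000):
    # Repeatedly multiply n by the first fraction that yields an integer;
    # halt when no fraction applies.
    for _ in range(max_steps):
        for f in program:
            m = f * n
            if m.denominator == 1:        # n * f is a whole number
                n = m.numerator
                break
        else:
            return n                      # no rule applied: normal form
    return n

# The classic one-fraction adder: 3/2 turns 2^a * 3^b into 3^(a+b),
# i.e. the multiset rule "replace one 2 with one 3".
print(fractran([Fraction(3, 2)], 2**3 * 3**5))    # 6561 == 3**8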
I'm somewhat surprised there isn't a semi-mainstream language for it. It's incredibly simple, with very few core concepts yet very powerful.
Similar to LISP in that sense.
There's Pure, but it's not exactly mainstream: https://agraef.github.io/pure-lang/
The foundations of Wolfram Language (Mathematica) are about transformations on symbolic expressions, at least conceptually.
I would think that's still the case.
https://www.researchgate.net/publication/243768023_Mathemati...
Mathematica is at least semi-mainstream. Not sure of any other examples though.
Maude is the most famous one that I know of, I think.
https://maude.lcc.uma.es/maude-manual/maude-manualch1.html#x...
I think the issue is performance. A true term rewriting system has to essentially operate on text, right?
No, it can operate on a data structure as well. There's string rewriting which does operate on text (but this can be stored in a structure amenable to applying rewrite rules versus brute force copying it or something silly). For term rewriting, there are plenty of efficient ways to store and operate on the information besides just textually.
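Right. One standard trick, sketched here on top of the nested-tuple terms from the normalize example earlier in the thread (reusing its match/subst helpers): index the rules by the root symbol of their left-hand side, so each step only consults rules that could possibly fire.

def index_rules(rules):
    # Bucket rules by the head symbol of their left-hand side.
    by_head = {}
    for lhs, rhs in rules:
        by_head.setdefault(lhs[0], []).append((lhs, rhs))
    return by_head

def step_indexed(by_head, term):
    # Only try rules rooted at this term's head symbol.
    for lhs, rhs in by_head.get(term[0], ()):
        env = {}
        if match(lhs, term, env):
            return subst(rhs, env)
    for i in range(1, len(term)):
        new = step_indexed(by_head, term[i])
        if new is not None:
            return term[:i] + (new,) + term[i + 1:]
    return None

With this, unrelated rules cost nothing at match time; real implementations go further with discrimination trees and hash-consing.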
Hmm maybe I misunderstand. All the rules must be applied to fixpoint or elimination, for every input right? And the larger the program (rule set) the worse the performance since more rules must be evaluated at each “tick” of the program, unless you play tricks with ordering rules.
Yes, that's often the objective. If they're properly written they will terminate, but not all sets of rules may terminate. It's possible for rules to cause divergence or cycles.
A -> A B (1)

A -> B (2)
B -> A

(1) never terminates, always adding a new B on each application but never removing the A. (2) doesn't grow, but never terminates either, since each term is replaced with the other.

Not necessarily. I would think terms can be converted to numbers, like how the Ruby VM compiles symbols.
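For instance (a hypothetical sketch of the idea, not how Ruby actually stores symbols): hash-cons every term into a small integer id, so structurally equal terms share one id and equality checks during matching become single integer comparisons.

_ids = {}

def intern(term):
    # Intern children first, so the key is built from small ids
    # rather than nested tuples.
    key = (term[0],) + tuple(intern(t) for t in term[1:])
    if key not in _ids:
        _ids[key] = len(_ids)
    return _ids[key]

a = intern(("add", ("0",), ("s", ("0",))))
b = intern(("add", ("0",), ("s", ("0",))))
assert a == b    # structurally equal terms get the same id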
I would argue that it is more "primordial". After all, computation is first and foremost a human activity, generally performed using pen and paper, which involves a good deal of rewriting (computers were originally people). The machine only came later, as a way to simulate this human activity. Its meaning is entirely contingent on the primordial notion; it has no meaning of its own.
Of course term rewriting has a meaning of its own; it is at the same time more meaningful and simpler than any other form of computation.
This is very cool! I'm actually working on something very similar, as I wanted a scripting language with very robust pattern matching and macro-like capabilities, with term rewriting for writing DSLs. Seeing this project is very inspiring, and it seems we have very similar ideas; my original idea was also that undefined terms would just be the way to construct data types. However, I want the laziness to be optional in my language, and for completely undefined terms to throw errors, as I think the ergonomics of just treating undefined terms as valid data would make for a bad programming experience. In my (upcoming, hopefully) language, data would be constructed like this:
pair a b = '(pair a b)

where ' is shorthand for quoting, like in Lisp. If you want to explicitly define datatypes on demand, you can just manually quote and not use the pair function constructor. One thing I thought about too was smart constructors; say you want to define the mod 3 group:

zero = 'zero
s (s (s x)) = x
s x = '(s x)

If values are always constructed with either "s" or "zero", the smart constructor of s enforces that you can't create any terms with an s-chain of 3 or higher, IF the function arguments are eagerly evaluated.

This is vastly more pattern matching than term rewriting. In a term rewriting system you have no types, for a start: https://en.wikipedia.org/wiki/Rewriting
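The eager smart-constructor idea above is easy to sanity-check in a strict host language. Here's a quick Python rendering of those three pseudocode rules (a toy translation, not the poster's actual language):

def s(x):
    # Collapse the would-be third 's' before ever building the term,
    # so no constructed value carries an s-chain of length 3 or more.
    if x[0] == "s" and x[1][0] == "s":
        return x[1][1]
    return ("s", x)

zero = ("zero",)
print(s(s(s(zero))))    # ("zero",) -- three applications wrap around to zero

Because Python evaluates arguments eagerly, each inner call has already normalized its result, which is exactly the "IF the function arguments are eagerly evaluated" caveat.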
Is that important here? It looks like, semantically, types as presented are no more than magic constants to match on.
Yes, in term rewriting systems the only thing that matters is the lexical structure of the "program" you're running on top of the transform rules. A simple example of running a TRS is deriving a term of a BNF grammar; a less simple one is symbolically expanding a term in a computer algebra system.
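The computer algebra case is just more rules over the same machinery. For example, distributivity as a rewrite system, reusing the normalize sketch from earlier in the thread:

# Expand products over sums by rewriting to a fixpoint.
expand = [
    (("*", "a", ("+", "b", "c")), ("+", ("*", "a", "b"), ("*", "a", "c"))),
    (("*", ("+", "a", "b"), "c"), ("+", ("*", "a", "c"), ("*", "b", "c"))),
]

term = ("*", ("+", ("x",), ("y",)), ("z",))    # (x + y) * z
print(normalize(expand, term))                 # x*z + y*z, as a nested term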
Ok, sorry. So the issue is not that some of the runtime data is being interpreted as types; it's that decisions are being made based on the data and not solely on the structure of the program.
In the most general sense, yes. However, you can encode whatever you want on top of a TRS, just like you can encode any computer language in the lambda calculus, because both are Turing-complete models of computation.
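As a tiny example of such an encoding, Boolean logic drops straight onto the same rewriter (again reusing the earlier normalize sketch):

# Booleans as bare constructors, logic as rewrite rules.
bools = [
    (("not", ("true",)), ("false",)),
    (("not", ("false",)), ("true",)),
    (("and", ("true",), "x"), "x"),
    (("and", ("false",), "x"), ("false",)),
]

print(normalize(bools, ("and", ("not", ("false",)), ("true",))))    # ("true",)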
Marty Alain's Lambdaway is another term-rewriting evaluator, of sorts.