
Have a way to talk about lvalues in macros that need them

Open masak opened this issue 7 years ago • 27 comments

So, I was thinking about this. I almost feel ready to open a PR about it.

We're in a nice place with this one, because we already have several "waiting clients": #122, #152 and #203. In fact, let's use those as our example.

This protocol centers around objects of type Location; such an object allows us to save values into variables, array elements, object properties, dictionary values, etc. It has three methods:

  • loc.assign(value): stores the value in the location.
  • loc.modify(sub (old) { return ... }): reads the old value in the location, runs the supplied function on it, and stores back the new computed value while also returning it.
  • loc.postModify(sub (old) { return ... }): reads the old value in the location, runs the supplied function on it, and stores back the new computed value but returning the old value.

The main client of the .assign method is the 007 internals; this will be the canonical way to do assignments.

The .modify method figures in all our three use cases; see below. It's very likely that a macro user who wants to develop similar types of macros will want to go for this one.

The .postModify method is for postfix:<++> and postfix:<-->; hence the name. I know of no other uses; of course it's mostly a convenience since you could always do this yourself with .assign if you wanted. I pondered calling .modify .preModify for symmetry, but that would underplay its central importance in the API. Also, .postModify is the strange one, in an asymmetric way.
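In Python terms, the protocol might look something like this (a minimal sketch; `ArrayElementLocation` and the array-element backing are illustrative assumptions for this comment, not 007's actual internals):

```python
# A minimal sketch of the proposed Location protocol, here backed by
# an array element. The class and its backing are hypothetical.
class ArrayElementLocation:
    def __init__(self, array, index):
        self.array = array
        self.index = index

    def assign(self, value):
        # store the value in the location
        self.array[self.index] = value

    def modify(self, fn):
        # read the old value, compute the new one, store it, return the NEW value
        new = fn(self.array[self.index])
        self.array[self.index] = new
        return new

    def postModify(self, fn):
        # read the old value, compute the new one, store it, return the OLD value
        old = self.array[self.index]
        self.array[self.index] = fn(old)
        return old

xs = [10, 20, 30]
loc = ArrayElementLocation(xs, 1)
loc.assign(5)                           # xs is now [10, 5, 30]
print(loc.modify(lambda v: v + 1))      # prints 6 (the new value)
print(loc.postModify(lambda v: v + 1))  # prints 6 (the old value); xs[1] is now 7
```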

Now, let's implement prefix:<++> and postfix:<++>:

macro prefix:<++>(operand_ast) {
    return quasi {
        {{{lvalue(operand_ast)}}}.modify(sub (v) {
            return v + 1;
        });
    };
}

macro postfix:<++>(operand_ast) {
    return quasi {
        {{{lvalue(operand_ast)}}}.postModify(sub (v) {
            return v + 1;
        });
    };
}

Next up, assignment metaops:

macro postfix:<op=>(lhs_ast, op_ast, expr_ast) is parsed(/ <infix> "=" <EXPR> /) {
    return quasi {
        {{{lvalue(lhs_ast)}}}.modify(sub (v) {
            return v {{{op_ast @ Q::Infix}}} {{{expr_ast}}};
        });
    };
}

And lastly, the dotty assignment op:

macro postfix:<.=>(lhs_ast, identifier_ast, argumentlist_ast) is parsed(/ ".=" <identifier> "(" <argumentlist> ")" /) {
    return quasi {
        {{{lvalue(lhs_ast)}}}.modify(sub (v) {
            return v.{{{identifier_ast @ Q::Identifier}}}({{{argumentlist_ast @ Q::ArgumentList}}});
        });
    };
}

Note that lvalue(expr) is something that happens at macro time. It returns an opaque object that you're not supposed to be that interested in, but that can go through the same kind of programification as Qnodes, and come out the other end as a Location. I believe this is the first time we allow something other than Qnodes and None through the programification tunnel; maybe it's the start of an exciting trend.

Anyway, the reason I decided to have lvalue(expr) act at macro time is that it limits somewhat the amount of crazy you can do with this API (which is something that concerned me). In order to use it for anything nontrivial, you basically have to start at macro time, so you have to be a macro writer. Hopefully, that'll sell less of program analyzability down the river.

masak avatar Jan 30 '17 07:01 masak

As I come back to this idea only a few days later, I'm not immediately sure what an "assignment protocol" such as .assign and .modify gives us over just writing:

quasi {
    {{{it}}} = {{{it}}} * 2;
}

Etc.

Or, to be precise, I can see how it would be nice in itself to expose this thing from the compiler out into user space (or macro author space) as the Location type. It means the compiler and the macro author are now, semantically, on the same level of power.

But I don't immediately see how it gains us anything compared to just using the assignment operator directly in quasis. Maybe it wasn't obvious before I mapped this out that they'd come down to the same thing? Or maybe I'm missing some big advantage that I used to see? I notice that I am confused.

Even .postModify is in the end just a convenience, not a sine qua non. Here, let me re-implement postfix:<++> without it:

macro postfix:<++>(operand_ast) {
    return quasi {
        my preincrement_value = {{{operand_ast}}};
        {{{operand_ast}}} = {{{operand_ast}}} + 1;
        preincrement_value;
    };
}

I guess what this means is that, even if we turn out to like the idea of an assignment protocol, it's no longer a blocker for #122, #152 and #203. Which I guess is a good thing.

masak avatar Feb 02 '17 07:02 masak

Oh!

Or maybe I'm missing some big advantage that I used to see? I notice that I am confused.

Yes, the assignment protocol is needed. And no, it wasn't obvious, but to a trained macro author's eye it might well be.

Consider again this small non-usage of the assignment protocol:

quasi {
    {{{it}}} = {{{it}}} * 2;
}

Now think about what happens when it is a Qtree for ++foo && bar. That's right, evaluation is not pure and can have side effects.

I term this the "Single Evaluation Rule". With it should come a kind of "allergy" that makes a seasoned macro author spot multiple unquotes of the same thing in a macro, and flag them as very likely bugs. (The exception being, of course, if the macro is control-flowy enough to want to multiply evaluate things. infix:<xx> comes to mind.)

The usual fix to uphold SER is to unquote once and store in a temp variable. That doesn't work with assignments, because we only get an rvalue, which is less than what we need to store something in that location. Hence the need for the assignment protocol.
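The hazard can be reproduced in plain Python (the names `side_effecty`, `double_naive`, and `double_ser` are made up for illustration; the two functions stand in for unquoting a Qfragment twice vs. once):

```python
# Double evaluation vs. the Single Evaluation Rule, in Python terms.
counter = {"n": 0}

def side_effecty():
    # stands in for a side-effecting operand like `++foo && bar`
    counter["n"] += 1
    return counter["n"]

def double_naive(expr):
    # "unquotes" the operand twice, so the side effect runs twice:
    # a SER violation
    return expr() + expr()

def double_ser(expr):
    # SER-respecting fix: evaluate once into a temp, reuse the temp
    temp = expr()
    return temp + temp

counter["n"] = 0
print(double_naive(side_effecty))  # 1 + 2 == 3: evaluated twice
counter["n"] = 0
print(double_ser(side_effecty))    # 1 + 1 == 2: evaluated once
```

The temp-variable fix works fine for reads, which is exactly why it falls short for assignments: the temp holds only the rvalue.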

masak avatar Feb 08 '17 01:02 masak

I feel like it's a good idea to draw a parallel with Common Lisp's setf here.

Note: it's not directly related to the first message, but it resonates with the .= operator and such.


setf sets "places". A place is "something assignable". A basic example (from here):

(defvar *a* 0) ;; CL doesn't have toplevel lexical variables; a dynamic one will do.

;; "set quoted". This form is more recent (at least for Lisp).
(setq *a* 2)

;; This is the "old way":
(setf (symbol-value '*a*) 2)
;; (the oldest way, just for reference purposes)
(set '*a* 2)

;; As CL is a Lisp-2, we'll also desugar a defun:
(defun a () 1)
;; ... which is the same as:
(setf (symbol-function 'a) (lambda () 1))

That's because CL (Lisp-2) uses "slots" for symbols. There's a slot for the value, one for the function. Seems to me like it's an interesting path to explore.

(defvar *xs* (cons 1 nil))
(setf (car *xs*) 2) ;; *xs* is now (2)

Defining them is pretty easy:

(defvar *user* (list "John" "Doe"))

(defun (setf name) (new-name list)
  (setf (car list) new-name))
(setf (name *user*) "Jane")
(princ (car *user*)) ;; prints Jane

Older versions had defsetf, which created a differently-named function (instead of the cons-named (setf x) here). An example for rplaca (almost):

(defsetf car rplaca) ;; "replace car" (not exactly correct)

Which means we can reveal a trick I mentioned earlier:

(defsetf symbol-value set) ;; easily reimplemented

If more control is needed, defsetf provides a long form, along with define-modify-macro:

(define-modify-macro incf* (i) +) ;; incf* so as not to clash with the built-in incf
(defvar *n* 3)
(incf* *n* 1) ;; *n* is now 4

;; here, the long form of defsetf - its body must return a form to be
;; evaluated (e.g. `(+ 1 2)) which performs the store

;; so this:
(defsetf car rplaca)
;; roughly becomes this:
(defsetf car (cons) (new-value)
  (list 'progn (list 'rplaca cons new-value) new-value))
;; or, with backquote: `(progn (rplaca ,cons ,new-value) ,new-value)

I'm not gonna describe define-setf-method because it seems deprecated and I was unable to find anything on the subject...

It seems there's an even more complex define-setf-expander, but while this one doesn't seem deprecated, I'm not too sure what it's doing - and I can't find much on the internet.

vendethiel avatar Mar 06 '17 21:03 vendethiel

Yes, setf seems quite similar in spirit to what we need here for 007. Even more so after I read your comment, @vendethiel.

masak avatar Mar 07 '17 06:03 masak

Anyway, the reason I decided to have lvalue(expr) act at macro time is that it limits somewhat the amount of crazy you can do with this API (which is something that concerned me). In order to use it for anything nontrivial, you basically have to start at macro time, so you have to be a macro writer. Hopefully, that'll sell less of program analyzability down the river.

Making a mess of things is not that hard:

my global;

macro moo(x) {
    return quasi {
        global = {{{lvalue(x)}}};
    };
}

my state = "everything's fine";
moo(state);

global.assign("spooky action at a distance");
say(state);  # spooky action at a distance

If someone wants a location to leak out into user code, all they have to do is throw it across the fence. I guess we could look into preventing the assigning of Location values, but that's likely somewhere between impractical and impossible.

But I think I have a much more important reason why it has to be {{{lvalue(ast)}}}:

  • The lvalue() call has to be at macro time, because if we put it later, at quasi time, we already have an rvalue and that's too late.
  • The {{{ }}} turns the opaque object that you're not supposed to care about into a Location, but it does this differently depending on the current frame.

An example:

macro moo(x) {
    return quasi {
        {{{lvalue(x)}}}.modify(sub (v) { return v.uc() });
    };
}

my array = ["a", "b", "c"];
for ^array.size() -> i {  # I have to loop like this -- the block param doesn't alias
    moo(array[i]);
}
say(array);    # ["A", "B", "C"]

Though there's only one call to moo (at parse time) and the quasi is only interpolated once, the quasi'd code ends up running three times. Each of those times we get a different Location (corresponding to each element "slot" in array) to assign to. This works because the Location is dynamically computed from the expression array[i] in each frame in turn.

I don't know if I've mentioned it, but the idea behind lvalue() and Location is basically a formalization and API-ization of what currently happens in 007's .put-value method.

masak avatar Jul 15 '17 13:07 masak

I'm really eager to see this one happen, but I'm also filled with a certain unease. My driving question is "What code gets generated for this to work?" That is, once the macro and the quasi have done their work, what's the resulting code that's been injected?

Preferably, the answer to that question should still make sense if we also imagine 007 running with a backend that doesn't interpret the AST directly, but that instead runs some kind of byte code or machine code.

masak avatar Feb 25 '18 07:02 masak

Oh! Wait!

Postulate a built-in macro solidify (name very negotiable, but the analogy here of the code going from not-solid (as in risky) to solid, is pretty apt, I think). The macro turns code like this:

solidify(x.y().foo[1] = x.y().foo[1] + 42);

Into code like this:

my temp9387 = x.y().foo;
temp9387[1] = temp9387[1] + 42;

(Of course that temp9387 is actually a hygienically safe variable.)

The way solidify works is it assumes it gets passed an assignment, and that its left-hand side is either a simple identifier or a postfix like X[2], X["foo"] or (if we choose to make some object properties writable) X.foo.

  • If it finds a simple identifier name on the left-hand side of the assignment, it's trivially done.
  • If it finds one of the postfixes, it extracts the X part of the expression into a fresh temporary (as above), and then does a conceptual search-and-replace of both sides of the assignment, turning every occurrence of the X expression into the temporary variable.

If any of these assumptions fails, an informative error will be thrown at compile time.
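A rough sketch of that transformation in Python, on a toy tuple-based AST (the representation and helper names are hypothetical stand-ins for 007's Qtrees, and `temp9387` stands in for a hygienically safe temporary):

```python
# A sketch of the solidify transformation on a toy AST.
# An assignment is ("assign", lhs, rhs); an indexing postfix is
# ("index", base, key); a plain name is ("name", str).

def solidify(assignment):
    op, lhs, rhs = assignment
    assert op == "assign", "solidify expects an assignment"
    if lhs[0] == "name":
        return [assignment]          # a simple identifier: trivially done
    assert lhs[0] == "index", "lhs must be a name or a postfix"
    base = lhs[1]
    temp = ("name", "temp9387")      # stands in for a hygienic temp

    def replace(node):
        # conceptual search-and-replace of the base expression
        if node == base:
            return temp
        if isinstance(node, tuple):
            return tuple(replace(part) for part in node)
        return node

    return [
        ("declare", temp, base),     # my temp9387 = <base>;
        ("assign", replace(lhs), replace(rhs)),
    ]

# x.y().foo[1] = x.y().foo[1] + 42, with the base expression
# flattened to the single name "xyfoo" for brevity:
before = ("assign",
          ("index", ("name", "xyfoo"), 1),
          ("add", ("index", ("name", "xyfoo"), 1), 42))
for stmt in solidify(before):
    print(stmt)
```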

This is a vastly better idea than the lvalue stuff above — which I will nevertheless leave in place for posterity. But ignore lvalue for now, and we'll focus on solidify instead.

(First I thought of the X part as a "path", as in, it's a name followed by zero or more postfixes. But that assumption does not hold water, because the X could actually be any expression (including side-effecty things like function calls, increments) as long as it results in a value that can work as a postfixable container of some kind.)

The macro itself is interesting in its own right, as it is a legitimate use case for modifying (or rather, deriving something new from) a macro argument by pattern-matching its inside.

Best of all, it's an excellent representative of a macro because it allows you to write code that feels right, and then under the hood it turns it into code that just works. This is what macros are all about. (Never mind that the problem we're solving was introduced by having macros in the first place.)

Here are the use cases from the OP, instead expressed as solidify. Clearly this is much better.

macro prefix:<++>(v) {
    return quasi {
        solidify({{{v}}} = {{{v}}} + 1);
    };
}

macro postfix:<++>(v) {
    return quasi {
        solidify({{{v}}} = {{{v}}} + 1) - 1;    # paging Dr #279
    };
}

Next up, assignment metaops:

macro postfix:<op=>(lhs, op, expr) is parsed(/ <infix> "=" <EXPR> /) {
    return quasi {
        solidify({{{lhs}}} = {{{lhs}}} {{{op @ Q::Infix}}} {{{expr}}});
    };
}

And lastly, the dotty assignment op:

macro postfix:<.=>(lhs, identifier, argumentlist) is parsed(/ ".=" <identifier> "(" <argumentlist> ")" /) {
    return quasi {
        solidify({{{lhs}}} = {{{lhs}}}.{{{identifier @ Q::Identifier}}}({{{argumentlist @ Q::ArgumentList}}}));
    };
}

masak avatar Feb 25 '18 12:02 masak

I see the point of the macro, but since {{{x}}} = {{{y}}} is always an error, maybe we should instead look at compound assignment forms? Though I guess my idea falls short if we need repeating ({{{v}}} = f({{{v}}}, {{{v}}}+2);).

vendethiel avatar Feb 25 '18 22:02 vendethiel

But... {{{x}}} = {{{y}}} is not always an error, that's the thing. If this were a 100% thing, we'd just outlaw it and move on.

The thing that's a prevalent risk is interpolating the same Qfragment more than once. And with "mutating macros", like the use cases in this issue, we're pretty much guaranteed to interpolate twice: once for reading, once for assigning.

But the Single Evaluation Rule is more a rule of thumb than an iron-clad thing. See #278 for a counterexample.

masak avatar Feb 26 '18 03:02 masak

Of course, this issue no longer prescribes an assignment protocol as such. With solidify, there's no longer any need for lvalue and Location. The solidify macro is more of a built-in opt-in tool than a protocol.

masak avatar Feb 26 '18 06:02 masak

I'm less enthused by solidify now than back when I thought of it. Or rather, I still think it could be very useful as a tool to make code look simpler and more closely aligned with its intent, but I'm not so sure it should be the fundamental mechanism in 007 for talking about lvalues.

I'm suddenly thinking there's a risk that solidify might pick up false positives: repeated subexpressions in the AST that don't originate from the same AST being unquoted several times. I guess it would need to work on the quasi before it interpolates. Even that feels like less than a sure thing.

I'm thinking the fundamental mechanism for talking about locations should be something like lvalue after all. The "different Location in different frame" problem still looms large, and means that Location is a reified thing at runtime, not just a compiler construct.

Part of what makes this hard to think about is that on a bytecode level, there's usually no first-class support for lvalues/locations/slots. Maybe starting at that end would make things fall into place?

masak avatar Jun 12 '18 16:06 masak

Any "value access" instruction that typically returns an rvalue — whether it be a lexical access or some kind of indexed access on a container — factors into first getting hold of the value's underlying Location and then calling .read() on it.

Each lexical and indexed access opcode could have a corresponding one that gives back a Location.

Calling .write() on a location is what the assignment operator/statement code-generates to. (This is a weak argument for it being a statement form, IMHO. See #279.)

Whichever solution we arrive at, I would expect it to get the Location once, and then use it for reading and writing as needed.
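As a sketch of that "get the Location once" discipline, here's a swap in Python with locations modeled as (read, write) closure pairs (the `element_location` helper is a made-up analogy, not proposed API):

```python
# Sketch: fetch each Location once, then read and write through it.
def element_location(array, index):
    # a "location" modeled as a pair of closures over one array slot
    def read():
        return array[index]
    def write(value):
        array[index] = value
    return read, write

def swap(loc_a, loc_b):
    read_a, write_a = loc_a
    read_b, write_b = loc_b
    # each location was computed exactly once; now read and write freely
    a, b = read_a(), read_b()
    write_a(b)
    write_b(a)

xs = [1, 2, 3]
swap(element_location(xs, 0), element_location(xs, 2))
print(xs)  # [3, 2, 1]
```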

masak avatar Jun 12 '18 17:06 masak

The "different Location in different frame" problem still looms large, and means that Location is a reified thing at runtime, not just a compiler construct.

I just thought of a simple solution to this: place lvalue() outside of the unquote, in the dynamic parts of the program that can run differently each time around.

lvalue() would still be a built-in macro. Its semantics is "do the lookup, but return the found location instead of its .read() value". The need for the opaque object that passed through unquotes goes away.

This is so nice and simple. Sometimes I'm shocked by my ability to let red herrings obscure the view of, um, much better herrings.

lvalue() can also be used in mainline code. But culturally this should be some kind of weak taboo, since Locations outside of macros can only be used to confusing or malicious ends.

masak avatar Jun 22 '18 10:06 masak

I think I forgot to mention at the time, but the hard-won gist that currently represents our best guess at how macro hygiene will actually be implemented — and see especially the appendix — contains Location as a central part.

Namely, those variable lookups that are "dislodged" because they need to find something inside the macro body (which isn't an OUTER to the mainline) can be precomputed into Locations (and reads/writes) on those, through the lucky happenstance that macros are run exactly once per injectile, and so a lookup from an injectile into a macro is unique, and can therefore be precomputed.

masak avatar Aug 01 '18 18:08 masak

lvalue() can also be used in mainline code. But culturally this should be some kind of weak taboo, since Locations outside of macros can only be used to confusing or malicious ends.

I think in the end usage of lvalue() in mainline code means that some code optimizations will be locally switched off. If escape analysis cannot show that the value doesn't escape, code optimizations will be globally switched off.

masak avatar Nov 08 '18 12:11 masak

If escape analysis cannot show that the value doesn't escape, code optimizations will be globally switched off.

It's possibly worse than that. If we want to have a fair chance at targeting backends other than a dedicated VM, then Location values cannot be allowed to escape, as they might lead to generated code that can't be expressed as lexical accesses.

Hm. Maybe that can be detected in a late-bound fashion? Why does this feel similar in spirit to #388?

masak avatar Nov 29 '18 19:11 masak

As I start implementing Location in a branch, I quickly realize two things:

  • Location values, at least the useful ones, are backed, in the same sense Ints and Arrays are backed — their implementation/semantics does not reside completely in 007 userland itself, but is instead tied to the implementation. This makes a lot of sense, since the whole point of Location values is to expose an otherwise unreachable part of the compiler/runtime.

  • The lvalue macro needs to expand into something; my current take is that this something should be called takeLocation and be part of the built-ins. Although takeLocation is never meant to be called explicitly, I currently have no desire to hide it (in a Symbol or similar). I might reconsider. takeLocation takes as its argument the Qtree representing the access path sent to lvalue.

masak avatar Nov 29 '18 19:11 masak

All through this issue I've pretty consistently used lvalue as the name of the macro. I'm now having second thoughts and want to call it location instead. Why? Because it's basically a factory macro for Location objects anyway, and because lvalue runs the risk of creating associations only with the writing part of a Location, not its reading part.

Speaking of which, I think I nowadays also favor .get and .set as simple names for the methods on a Location. (Not .read and .write as previously.)

I wouldn't be super-averse to calling the whole thing Slot instead (and slot). But right now, Location feels pretty decent from a Huffman perspective.

masak avatar Feb 27 '19 12:02 masak

We'll want this one fairly soon, since both swap (#476) and prefix:<++> (#477) have launched now as examples, both of which need location to comply with the Single Evaluation Rule.

masak avatar Feb 27 '19 12:02 masak

I keep coming back to this issue. It's an important one to macro authors.

There are two related things a macro author might want access to:

  • The "access path"; the expression describing how to get at the value. Might be just a single identifier, or might have any number of indexings, slot accesses, or function calls. If we ever choose to make things like functions and || and ?? !! have lvalue semantics (still an open question), those would qualify at the end too. Access paths are important for macro authors who wish to do something other than evaluate an expression; for example, abstractly evaluate it, or solve equations.

  • Usually, though, what's interesting is only the memory location at the end of the access path. We can get/set/delete it, whereas with normal evaluation, it's as if a get was already done for us. We need this everywhere the Single Evaluation Rule rears its head, which is in most places where we use something more than once.

I've been dealing with C++ a bit lately, for the first time in years. I've come to the conclusion that Location values are semantically identical to C++ references. The one difference is that in 007, we kind of expect them to be a compile-time artifact... Not quite syntactical, but kinda "compile-time-only first-class values". In practice, that might mean that if we can't prove a Location value doesn't escape, we might choose to throw a compile-time error. So, a kind of neutered reference.

We might also consider whether we want to call it Reference<T> instead of Location<T>, simply to appeal/attach to the C++ (and Perl 5) tradition that goes with that word.

I also think there are insights to be had just by studying how C++ handles references. Everything from optimizations to common pitfalls — all of it might apply to and inform our use cases.

masak avatar Jun 01 '19 09:06 masak

I just came across a page that says FORTRAN can be so fast because it lacks pointers/references. A typical case of "weaker is better".

On the other hand, this issue is open (and long) because there's a real need here. Macros need to talk about lvalues, and (seemingly inevitably) they need to be reified so that we can choose not to accidentally dereference them multiple times.

masak avatar Jul 08 '19 12:07 masak

(Off-topic: will I see you at Riga? There could be some on-site bikeshedding if so)

The FORTRAN thing is due to one thing only: the absence of aliasing is guaranteed. C has since gained a keyword, restrict, so that compilers can optimize that way as well. Just my 2cts on that.

vendethiel avatar Jul 08 '19 17:07 vendethiel

@vendethiel Yes, I heard you're coming too. :smile: It's a few months ago, but we talked about me going already. I have a talk there about 007. https://github.com/masak/007/issues/481#issuecomment-483844841

Yes, "no aliasing" makes a lot of sense. I'm still hoping that most cases of lvalues in 007 will be possible to optimize away at compile time. Maybe they'll jump a function boundary or two, but that's all. We shall see.

masak avatar Jul 08 '19 22:07 masak

The above discussion is already pretty good, and makes most of the main points I wanted to make. We're in a bind here, for the following reasons:

  • Adding first-class lvalues/locations to the language seems absolutely foolhardy, because it potentially de-optimizes everything the way references do in C++.
  • Adding first-class lvalues/locations to the language seems necessary (?) for two reasons, from what I can see:
    1. A use case such as swap needs to precisely control reading/writing of locations, for which it needs to refer to these locations.
    2. The "access path" use cases apply with things like exists, delete, and possibly other things that want to play with evaluation rules.

All this time, I've been wanting to go "screw it, let's just implement @vendethiel's setf design" (which is a design that seems to have stood the test of time). There's any number of ways we could do that. But... then I remember that having a solution which is not first-class, while a great relief in some ways, would not address points 1 and 2 above.

Design-wise, we're trapped between a rock and a hard place. Looking at the swap use case, I guess things would be largely OK if we just took enough measures to prevent those lvalues from escaping.

One thing that this comment fails to report is the (absolutely frightening) realization that if you take a location on array[5] (or whatever) and then clear the array of elements, your location is now... what? invalid? Do we need to think of lvalue invalidation the way C++ STL thinks about iterator invalidation? I don't really see how we can avoid it, as soon as we start taking references to memory locations, which might then change from under us.
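The worry can be made concrete in Python (the Location class is a hypothetical model, using the .get/.set names floated earlier in the thread):

```python
# Sketch of the lvalue-invalidation worry, in Python terms.
class Location:
    def __init__(self, array, index):
        self.array = array
        self.index = index

    def get(self):
        return self.array[self.index]

    def set(self, value):
        self.array[self.index] = value

xs = [0, 1, 2, 3, 4, 5]
loc = Location(xs, 5)
loc.set(50)          # fine: xs[5] is now 50
xs.clear()           # the backing storage changes from under us
try:
    loc.get()        # the location is now... what?
except IndexError as e:
    print("invalidated:", e)
```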

Tackling this issue sometimes feels like grabbing hold of a small fish in the water, only to realize that it's not a fish, it's the largest mammal on Earth — and now neither letting go nor catching your prey is really an option.

masak avatar Oct 19 '21 09:10 masak

Heh. Also, this comment correctly points out that #410 hygiene could be implemented using locations/lvalues. Not first-class ones, since they are only needed in a layer beneath the code itself, so to speak. Similar to how some compilers implement assignment to lexical variables by compiling those variables to a 1-element vector (or equivalent). (Since it's all lexical, we know exactly which variables we will need to do that to!)
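The 1-element-vector trick, sketched in Python (names illustrative; this mimics a compiler that can't otherwise assign to a captured lexical):

```python
# An assignable captured variable compiled to a single-cell list;
# every read and write goes through the cell.
def make_counter():
    n = [0]                 # `n` compiled to a 1-element vector
    def increment():
        n[0] = n[0] + 1     # write through the cell
        return n[0]         # read through the cell
    return increment

counter = make_counter()
counter()
counter()
print(counter())  # 3
```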

What I realized the other day (because the ROADMAP says so) is that the hygiene/lvalues equivalence cuts both ways: whether we provide first-class lvalues or not, hygienic first-class code objects (produced by macro arguments and quasi terms) will be functionally equivalent to first-class lvalues. That's right: while this issue is deliberating the scary consequences of introducing first-class lvalues into Alma, code quotes are already first-class lvalues.

(Edit: 😱)

masak avatar Oct 19 '21 09:10 masak

What I realized the other day (because the ROADMAP says so) is that the hygiene/lvalues equivalence cuts both ways: whether we provide first-class lvalues or not, hygienic first-class code objects (produced by macro arguments and quasi terms) will be functionally equivalent to first-class lvalues. That's right: while this issue is deliberating the scary consequences of introducing first-class lvalues into Alma, code quotes are already first-class lvalues.

I'm not sure it's that bad; that is, I don't automatically agree with October-me.

Consider:

macro moo() {
    my array;
    return quote {
        array = [0, 2, 3];
        array[0] = 1;
    };
}

This macro, when called, would generate code that provides write access to an array (labeled with the variable name array). Each macro call site would get its own fresh array (because each macro call frame has its own which it bestows upon the generated code). The generated code clearly has write access both to the array and its elements, but it does not seem to be the wild kind of first-class lvalue write access that the comment I'm replying to made a big deal about.

Specifically, the array[0] does refer to an anonymous "element lvalue" of the array, but this element remains non-first-class. If you did your worst and shipped around a quote { array[0] } who-knows-where, eventually assigning to it, all you would be able to do is access (the lexical variable) array from the macro body, and make a (late-bound) lookup on it. The late-bound lookup means we have no means of carrying "invalidated" array element cells around.

It's possible that if we introduced some form of "access path", we'd also be in trouble. Depends how it's done.

masak avatar Mar 31 '22 02:03 masak

Somewhat in the manner of zombies, my old long-running issue threads have a way of laying dormant for long stretches of time, and then get up and assault me in inconvenient ways.

Around the 20 minute mark in this talk, Walid Taha introduces a very simple inlining macro mechanism in ML, with this example code:

let mac word_size = raise Unknown_size
    in iterate shift_left word_size end

And then immediately turns around and says

[...] if we make such an extension in this particular manner, then we immediately lose one of the basic properties in call-by-value languages, which is that we can treat any variable as a value. In this case, word_size is not a value. So if we do a substitution where we assume that it's a value, and it disappears, or duplicates, then we would be duplicating an effect. So in fact, even though this looks like a straightforward thing to do in terms of introducing macros to the language, it's a bad design choice. Here, introducing macros like this would mean that we would have things that look like variables, but they're not values.

He instead evolves the added feature to look like this:

let mac word_size () = raise Unknown_size
    in iterate shift_left (word_size ()) end

And says:

Whenever we want to define a macro, it looks like a computation. That takes care of that problem.

masak avatar Jun 16 '23 06:06 masak