Partial evaluation is neat. In short, it’s all about taking an existing program, modifying it to hold some of its inputs as constants, and then letting the compiler/optimizer go hog wild on it. The result is still a program—not a value—and it’s usually faster than the original program.

The usual small example is the power function. If you have a function that takes two arguments, x and y, and returns x^y:

int power(int x, int y) {
  int result = 1;
  for (int i = 0; i < y; i++) {
    result *= x;
  }
  return result;
}

If you partially evaluate this function with respect to y at y = 5, you get a new function that takes one argument, x, and returns x^5:

int power_5(int x) {
  int result = 1;
  for (int i = 0; i < 5; i++) {
    result *= x;
  }
  return result;
}

Now, to you, this might not look that different from the original function. But to an optimizer, it is a new world of opportunity. Because the trip count is now a known constant, the optimizer can fully unroll the loop and eliminate the loop counter and its exit test:

int power_5(int x) {
  return x * x * x * x * x;
}

weval does that for entire WebAssembly modules, which are normally much bigger than a small power function. You might want to use it if, for example, your WebAssembly module is an interpreter. Imagine a world where you have compiled a runtime such as SpiderMonkey or CPython to WebAssembly. You could then run your Python or JavaScript programs on the WebAssembly runtime, but they would be slower than if you had compiled them directly to WebAssembly. And even if you compiled the JS/Python directly to Wasm, it would probably be slow unless your compiler did some fancy static analysis. This is where weval comes in.

This concept – partially evaluating an interpreter with a specific program to make a faster “compiled” program – is known as the first Futamura projection (longer explanation).