The JavaScript API for the HTML5 canvas element is already provided by Sunroof in the module `Language.Sunroof.JS.Canvas`. Let's look at how we can render one line of the clock face using Sunroof:

```
c # save
-- Draw one of the indicator lines
c # beginPath
c # moveTo (0, -u * 1.0)
ifB (n `mod` 5 ==* 0)
    (c # lineTo (0, -u * 0.8)) -- Minute line
    (c # lineTo (0, -u * 0.9)) -- Hour line
ifB (n `mod` 15 ==* 0)
    (c # setLineWidth 8) -- Quarter line
    (c # setLineWidth 3) -- Non-Quarter line
c # stroke
c # closePath
-- Draw the hour numbers
ifB (n `mod` 5 ==* 0)
    (do c # translate (-u * 0.75, 0)
        c # rotate (-2 * pi / 4)
        c # fillText (cast $ n `div` 5) (0, 0))
    (return ())
c # restore
```

The monadic `do`-notation is used for sequencing JavaScript statements in a neat fashion.

The first few lines probably look familiar to people who have written JavaScript before.

```
c # save
-- Draw one of the indicator lines
c # beginPath
c # moveTo (0, -u * 1.0)
```

The `#` operator is used instead of JavaScript's `.` operator. `u` represents the radius of the clock. Knowing this, you can see that we are calling methods on the JavaScript object `c` (our canvas context). Methods without parameters do not require empty parentheses, as a Haskell programmer would expect. The tuple used in the call of `moveTo` is only there to indicate that this parameter is a coordinate, not two single numbers. You can also see that JavaScript numbers are neatly embedded using the `Num` class and can be used naturally.

The next few lines show a branch.

```
ifB (n `mod` 5 ==* 0)
    (c # lineTo (0, -u * 0.8)) -- Minute line
    (c # lineTo (0, -u * 0.9)) -- Hour line
```

Haskell lacks the ability to deep-embed branches and boolean expressions. For that reason we use the `Data.Boolean` package: instead of `if-then-else`, you are required to use `ifB` when writing JavaScript.
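To see why an ordinary `if-then-else` cannot work, note that a deep embedding only has a *representation* of a boolean at Haskell run time, never an actual `Bool` to branch on. Here is a minimal sketch of the idea behind `ifB` (all names and types here are invented for illustration, not Sunroof's actual API):

```haskell
-- A toy deep embedding: booleans and numbers are *syntax*, not values.
newtype JSBool = JSBool String          -- rendered JavaScript boolean expression

data JSNum = Lit Int
           | Var String
           | IfE JSBool JSNum JSNum     -- a branch in the *generated* code

render :: JSNum -> String
render (Lit n) = show n
render (Var v) = v
render (IfE (JSBool c) t e) =
  "(" ++ c ++ "?" ++ render t ++ ":" ++ render e ++ ")"

-- deep-embedded comparison: builds syntax instead of comparing values
(==*) :: JSNum -> JSNum -> JSBool
a ==* b = JSBool ("(" ++ render a ++ "==" ++ render b ++ ")")

-- ifB builds a conditional *node*; a Haskell if-then-else would need a real Bool
ifB :: JSBool -> JSNum -> JSNum -> JSNum
ifB = IfE
```

Rendering `ifB (Var "n" ==* Lit 0) (Lit 1) (Lit 2)` yields a JavaScript conditional expression rather than branching in Haskell, which is exactly the behaviour the deep embedding needs.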

```
ifB (n `mod` 5 ==* 0)
    (do c # translate (-u * 0.75, 0)
        c # rotate (-2 * pi / 4)
        c # fillText (cast $ n `div` 5) (0, 0))
    (return ())
```

Note the `cast` operation in the call of `fillText`. As Haskell's type system is more restrictive than the one used in JavaScript, we sometimes have to `cast` one value to another. This may seem more complicated than writing JavaScript by hand, but when the API is used correctly (by not working around it), compile-time errors can reveal mistakes in the code early.

Getting back to the initial code block: How do we render the other 59 lines of the clock face? We just wrap this code into a function. Of course, we do this at JavaScript level.

```
renderClockFaceLine <- function $
  \ ( c :: JSCanvas
    , u :: JSNumber
    , n :: JSNumber ) -> do
    ...
```

We have just created the JavaScript function `renderClockFaceLine` with three parameters. So let's render the complete clock face using the `forEach` method provided by arrays.

```
c # save
c # rotate (2 * pi / 4) -- 0 degrees is at the top
-- Draw all hour lines.
lines <- array [1..60::Int]
lines # forEach $ \n -> do
  c # save
  c # rotate ((2 * pi / 60) * n)
  renderClockFaceLine $$ (c, u, n)
  c # restore
c # restore -- Undo all the rotation.
```

The `array` combinator converts the list into a JavaScript array. The supplied function for the loop body takes the current element as a parameter. In the loop body you can see how the `$$` operator is used, just as the `$` operator in Haskell, to apply a JavaScript function to its arguments. As the usefulness of partial application is questionable in the context of deeply embedded JavaScript, we only allow uncurried functions.

Using these techniques we can render the clock with about 90 lines of Haskell.

```
clockJS :: JS A (JSFunction () ())
clockJS = function $ \() -> do
  -- Renders a single line (with number) of the clock face.
  renderClockFaceLine <- function $
    \ ( c :: JSCanvas
      , u :: JSNumber
      , n :: JSNumber ) -> do
      c # save
      -- Draw one of the indicator lines
      c # beginPath
      c # moveTo (0, -u * 1.0)
      ifB (n `mod` 5 ==* 0)
          (c # lineTo (0, -u * 0.8)) -- Minute line
          (c # lineTo (0, -u * 0.9)) -- Hour line
      ifB (n `mod` 15 ==* 0)
          (c # setLineWidth 8) -- Quarter line
          (c # setLineWidth 3) -- Non-Quarter line
      c # stroke
      c # closePath
      -- Draw the hour numbers
      ifB (n `mod` 5 ==* 0)
          (do c # translate (-u * 0.75, 0)
              c # rotate (-2 * pi / 4)
              c # fillText (cast $ n `div` 5) (0, 0))
          (return ())
      c # restore
  -- Renders a single clock pointer.
  renderClockPointer <- function $
    \ ( c :: JSCanvas
      , u :: JSNumber
      , angle :: JSNumber
      , width :: JSNumber
      , len :: JSNumber ) -> do
      c # save
      c # setLineCap "round"
      c # rotate angle
      c # setLineWidth width
      c # beginPath
      c # moveTo (0, u * 0.1)
      c # lineTo (0, -u * len)
      c # stroke
      c # closePath
      c # restore
  -- Renders the clock's pointers for hours, minutes and seconds.
  renderClockPointers <- function $ \(c :: JSCanvas, u :: JSNumber) -> do
    (h, m, s) <- currentTime
    c # save
    c # setLineCap "round"
    -- Hour pointer
    renderClockPointer $$
      (c, u, (2 * pi / 12) * ((h `mod` 12) + (m `mod` 60) / 60), 15, 0.4)
    -- Minute pointer
    renderClockPointer $$
      (c, u, (2 * pi / 60) * ((m `mod` 60) + (s `mod` 60) / 60), 10, 0.7)
    -- Second pointer
    c # setStrokeStyle "red"
    renderClockPointer $$ (c, u, (2 * pi / 60) * (s `mod` 60), 4, 0.9)
    -- Restore everything
    c # restore
  -- Renders the complete face of the clock, without pointers.
  renderClockFace <- function $ \(c :: JSCanvas, u :: JSNumber) -> do
    c # save
    c # rotate (2 * pi / 4) -- 0 degrees is at the top
    -- Draw all hour lines.
    lines <- array [1..60::Int]
    lines # forEach $ \n -> do
      c # save
      c # rotate ((2 * pi / 60) * n)
      renderClockFaceLine $$ (c, u, n)
      c # restore
    c # restore -- Undo all the rotation.
  -- Renders the complete clock.
  renderClock <- continuation $ \() -> do
    u <- clockUnit
    (w, h) <- canvasSize
    c <- context
    -- Basic setup
    c # save
    c # setFillStyle "black"
    c # setStrokeStyle "black"
    c # setLineCap "round"
    c # setTextAlign "center"
    c # setFont ((cast $ u * 0.1) <> "px serif")
    c # setTextBaseline "top"
    c # clearRect (0,0) (w,h)
    c # translate (w / 2, h / 2)
    -- Draw all hour lines.
    renderClockFace $$ (c, u)
    -- Draw the clock pointers
    renderClockPointers $$ (c, u)
    c # restore
    return ()
  window # setInterval (goto renderClock) 1000
  -- and draw one now, rather than wait till later
  goto renderClock ()
  return ()
```

Using the `sunroofCompileJSA` function we can compile the deeply embedded JavaScript into a string of actual JavaScript.

```
sunroofCompileJSA def "main" clockJS >>= writeFile "main.js"
```

The compiled string will contain a function `main` that executes our JavaScript. This function is then called in the HTML file.

There are a few small utilities used in the code. The current time is obtained by `currentTime`, which uses the JavaScript date API provided by the module `Language.Sunroof.JS.Date`.

```
currentTime :: JS A (JSNumber, JSNumber, JSNumber)
currentTime = do
  date <- newDate ()
  h <- date # getHours
  m <- date # getMinutes
  s <- date # getSeconds
  return (h, m, s)
```

Note that this will literally copy the JavaScript produced by `currentTime` to wherever it is used, because it is not abstracted into a function at the JavaScript level. Every time you write Sunroof code that is not wrapped in a function, the Haskell binding will behave like a macro.

The other helpers are just shortcuts to get certain values:

```
canvas :: JS A JSObject
canvas = document # getElementById "canvas"

context :: JS A JSCanvas
context = canvas >>= getContext "2d"

clockUnit :: JS A JSNumber
clockUnit = do
  (w, h) <- canvasSize
  return $ (maxB w h) / 2

canvasSize :: JS A (JSNumber, JSNumber)
canvasSize = do
  c <- jQuery "#canvas"
  w <- c # invoke "innerWidth" ()
  h <- c # invoke "innerHeight" ()
  return (w, h)
```

You can see the clock in action here.

As you can see Sunroof mirrors JavaScript closely, and allows access to the capabilities a browser provides. But is this Haskell for Haskell’s sake? We do not think so:

- Sunroof is a deeply embedded DSL, so it is easy to write functions that generate custom code.
- Sunroof provides some level of type safety on top of JavaScript, including typed arrays, finite maps, functions and continuations.
- Sunroof also offers an abstraction over the JavaScript threading model, by providing two types of threads, atomic and (cooperatively) blocking. On top of this, Sunroof provides some Haskell concurrency patterns like `MVar` or `Chan` (`JSMVar` and `JSChan`).
- Furthermore, the `sunroof-server` package offers a ready-to-use web server to deploy generated JavaScript on the fly. It enables you to interleave Haskell and JavaScript computations as needed, through synchronous or asynchronous remote procedure calls.

A number of examples and a tutorial are provided on GitHub; their Haskell sources are part of the `sunroof-examples` package.

The `Set` type (from `Data.Set`) cannot be made an instance of `Monad` because of an `Ord` constraint on its desired binding operation:
```
returnSet :: a -> Set a
returnSet = singleton
bindSet :: Ord b => Set a -> (a -> Set b) -> Set b
bindSet sa k = unions (map k (toList sa))
```
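These definitions work fine on their own; it is only the `Monad` class that cannot accommodate them. A quick self-contained check (with illustrative values) shows `bindSet` collapsing duplicates, which is exactly the behaviour that needs `Ord`:

```haskell
import Data.Set (Set, singleton, unions, toList, fromList)

returnSet :: a -> Set a
returnSet = singleton

bindSet :: Ord b => Set a -> (a -> Set b) -> Set b
bindSet sa k = unions (map k (toList sa))

-- {1,2,3,4} `div` 2 gives the multiset {0,1,1,2}, which Set collapses to {0,1,2}
halves :: Set Int
halves = fromList [1,2,3,4] `bindSet` (\x -> returnSet (x `div` 2))
```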

However, despite being the classic example, in some ways it's not a very good one, because the constraint appears only on the second type parameter of `bindSet`, not on the first type parameter, nor on `returnSet`.

Another example of the problem also arises in the context of embedded domain-specific languages. When constructing a deep embedding of a computation that will later be compiled, it is often necessary to restrict the involved types to those that can be reified to the target language. For example:

```
data EDSL :: * -> * where
  Value  :: Reifiable a => a -> EDSL a
  ...
  Return :: Reifiable a => a -> EDSL a
  Bind   :: (Reifiable a, Reifiable b) =>
            EDSL a -> (a -> EDSL b) -> EDSL b
```

While we can construct a computation using `Return` and `Bind`, we cannot declare a `Monad` instance using those constructors because of the `Reifiable` class constraint.

(Note: if you want to try out the code in this post, you’ll need the following:

```
{-# LANGUAGE GADTs, MultiParamTypeClasses, KindSignatures,
             ConstraintKinds, TypeFamilies, RankNTypes,
             InstanceSigs, ScopedTypeVariables #-}
import GHC.Exts (Constraint)
import Data.Set hiding (map)
```

)

There have been numerous solutions proposed to address this problem. John Hughes suggested extending Haskell with Restricted Data Types: data types with attached class constraints. In the same paper, Hughes also suggested defining Restricted Type Classes: type classes that take a constraint as a parameter and impose it on all polymorphic type variables in the class methods. This latter approach was simulated several times (by Oleg Kiselyov and Ganesh Sittampalam, amongst others), before the constraint-kinds extension made it possible to encode it directly:

```
class RMonad (c :: * -> Constraint) (m :: * -> *) where
  return :: c a => a -> m a
  (>>=)  :: (c a, c b) => m a -> (a -> m b) -> m b
```
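As written above, nothing ties the constraint parameter `c` to the monad `m`, so GHC cannot infer `c` at use sites. One way to make the class directly usable, an assumption on our part rather than part of the original formulation, is to add a functional dependency `m -> c`; with that in place, a `Set` instance works out of the box:

```haskell
{-# LANGUAGE ConstraintKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies #-}
import Prelude hiding (return, (>>=))
import GHC.Exts (Constraint)
import Data.Set (Set, singleton, unions, toList, fromList)

-- The fundep m -> c is our addition, so uses of return/(>>=) can infer c.
class RMonad (c :: * -> Constraint) (m :: * -> *) | m -> c where
  return :: c a => a -> m a
  (>>=)  :: (c a, c b) => m a -> (a -> m b) -> m b

instance RMonad Ord Set where
  return   = singleton
  sa >>= k = unions (map k (toList sa))

-- {1,2,3} mapped to parities {1,0,1}, collapsed by Set to {0,1}
parities :: Set Int
parities = fromList [1,2,3] >>= \x -> return (x `mod` 2)
```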

It is then straightforward to define instances that require class constraints:

```
instance RMonad Reifiable EDSL where
  return = Return
  (>>=)  = Bind
```

However, restricted type classes are new type classes: using them doesn't allow compatibility with existing type classes. If restricted type classes were already used everywhere instead of the original type classes then there would be no problem, but this is not the case. A variant of restricted type classes (suggested by Orchard and Schrijvers) is to use an associated type function with a default instance:

```
class Monad (m :: * -> *) where
  type Con m (a :: *) :: Constraint
  type Con m a = ()
  return :: Con m a => a -> m a
  (>>=)  :: (Con m a, Con m b) => m a -> (a -> m b) -> m b

instance Monad EDSL where
  type Con EDSL a = Reifiable a
  return = Return
  (>>=)  = Bind
```

~~An attraction of this approach is that this type class could replace the existing `Monad` class in the standard libraries, without breaking any existing code.~~ EDIT: Edward Kmett points out that this claim is not true (see comment below). Any code that is polymorphic in an arbitrary monad `m` would be broken, as the unknown constraint `Con m` will need to be satisfied.

If we don’t want to modify the type class, then the alternative is to modify the data type. Specifically, we need to modify it in such a way that we can declare the type-class instance we want, but such that the operations of that type class will correspond to the operations we desired on the original data type. For monads, one way to do this is to use continuations, as demonstrated by Persson et al. An alternative (and, in our opinion, more intuitive) way to achieve the same effect is to construct a *deep embedding* of the computation, and restructure it into a *normal form*. The normal form we use is the same one used by Unimo and Operational, and consists of a sequence of right-nested `>>=`s terminating with a `return`.

The first argument to each `>>=` is a value of the original data type; such values we will call *primitive operations* (a.k.a. "non-proper morphisms", "effect bases", or "instruction sets").

The key feature of the normal form is that every type either appears as a type parameter on a primitive operation, or appears as the top-level type parameter of the computation. Consequently, if we enforce that all primitives have constrained type parameters, then only the top-level type parameter can remain unconstrained (which is easy to deal with, as we will show later). We can represent this using the following deep embedding:

```
data NM :: (* -> Constraint) -> (* -> *) -> * -> * where
  Return :: a -> NM c t a
  Bind   :: c x => t x -> (x -> NM c t a) -> NM c t a
```

The `t` parameter is the type of the primitive operations (e.g. `Set`), and `c` is the class constraint (e.g. `Ord`).

We can define a `Monad` instance for this deep embedding, which applies the monad laws to restructure the computation into the normal form during construction (just like the Operational package):

```
instance Monad (NM c t) where
  return :: a -> NM c t a
  return = Return

  (>>=) :: NM c t a -> (a -> NM c t b) -> NM c t b
  (Return a)  >>= k = k a                           -- left identity
  (Bind ta h) >>= k = Bind ta (\ a -> h a >>= k)    -- associativity
```

Primitive operations can be lifted into the `NM` type by applying the remaining monad law:

```
liftNM :: c a => t a -> NM c t a
liftNM ta = Bind ta Return -- right identity
```

Notice that only primitive operations with constrained type parameters can be lifted, thereby preventing any unconstrained types infiltrating the computation.

Once a computation has been constructed, it can then be interpreted in whatever way is desired. In many cases (e.g. the `Set` monad), we want to interpret it as the same type as the primitive operations. This can be achieved by the following lowering function, which takes interpretations for `return` and `>>=` as arguments:

```
lowerNM :: forall a c t. (a -> t a)
        -> (forall x. c x => t x -> (x -> t a) -> t a)
        -> NM c t a -> t a
lowerNM ret bind = lowerNM'
  where
    lowerNM' :: NM c t a -> t a
    lowerNM' (Return a)  = ret a
    lowerNM' (Bind tx k) = bind tx (lowerNM' . k)
```

Because the top-level type parameter of the computation is visible, we can (crucially) also constrain that type. For example, we can lower a monadic `Set` computation as follows:

```
lowerSet :: Ord a => NM Ord Set a -> Set a
lowerSet = lowerNM singleton bindSet
```
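Putting all the pieces together (and adding the `Functor` and `Applicative` instances that recent GHCs require of any `Monad`), the whole construction fits in a short, self-contained module; `example` uses ordinary do-notation even though `Set` itself is not a `Monad`:

```haskell
{-# LANGUAGE GADTs, ConstraintKinds, KindSignatures, RankNTypes,
             ScopedTypeVariables #-}
import GHC.Exts (Constraint)
import Data.Set (Set, singleton, unions, toList, fromList)

data NM :: (* -> Constraint) -> (* -> *) -> * -> * where
  Return :: a -> NM c t a
  Bind   :: c x => t x -> (x -> NM c t a) -> NM c t a

instance Functor (NM c t) where
  fmap f (Return a)  = Return (f a)
  fmap f (Bind tx k) = Bind tx (fmap f . k)

instance Applicative (NM c t) where
  pure = Return
  Return f  <*> m = fmap f m
  Bind tx k <*> m = Bind tx (\x -> k x <*> m)

instance Monad (NM c t) where
  Return a  >>= k = k a                         -- left identity
  Bind tx h >>= k = Bind tx (\x -> h x >>= k)   -- associativity

liftNM :: c a => t a -> NM c t a
liftNM ta = Bind ta Return                      -- right identity

lowerNM :: forall a c t. (a -> t a)
        -> (forall x. c x => t x -> (x -> t a) -> t a)
        -> NM c t a -> t a
lowerNM ret bnd = go
  where
    go :: NM c t a -> t a
    go (Return a)  = ret a
    go (Bind tx k) = bnd tx (go . k)

bindSet :: Ord b => Set a -> (a -> Set b) -> Set b
bindSet sa k = unions (map k (toList sa))

lowerSet :: Ord a => NM Ord Set a -> Set a
lowerSet = lowerNM singleton bindSet

-- do-notation over Set, via the normal-form embedding
example :: Set Int
example = lowerSet $ do
  x <- liftNM (fromList [1, 2, 3])
  y <- liftNM (fromList [10, 20])
  return (x + y)
```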

This approach is essentially how the AsMonad transformer from the RMonad library is implemented.

The idea of defining a deep embedding of a normal form that only contains constrained types is not specific to monads, but can be applied to any type class with a normal form such that all types appear as parameters on primitive operations, or as a top-level type parameter. We’ve just written a paper about this, which is available online along with accompanying code. The code for our principal solution is also available on Hackage.

This post just overviews the basics, and gives a simple example of usage.

KURE is based around the following data type:

```
data Translate c m a b = Translate { apply :: c -> a -> m b }

translate :: (c -> a -> m b) -> Translate c m a b
translate = Translate
```

There's a lot of type parameters, but the essential idea is that `Translate` represents a transformation that can be applied to a value of type `a` in a context `c`, producing a value of type `b` in the monad `m`. Actually, we require `m` to be a `MonadPlus`, as this allows us to encode notions of success and failure, which are integral to strategic programming. Specifically, `mzero` represents failure and `mplus` is a “catch” for both `mzero` and `fail`. To avoid clutter we'll omit the class constraints, but just imagine that wherever you see an `m` there's a `(MonadPlus m => ...)` to go with it.

We also define a synonym for the special case when the result and argument types coincide:

`type Rewrite c m a = Translate c m a a`

Translate itself forms a monad (and an arrow, and a bunch of other structures besides), which provides us with a lot of combinators for free. Two key definitions are composition and bind:

```
(>>>) :: Translate c m a b -> Translate c m b d -> Translate c m a d
t1 >>> t2 = translate $ \ c -> apply t1 c >=> apply t2 c

(>>=) :: Translate c m a b -> (b -> Translate c m a d) -> Translate c m a d
t >>= f = translate $ \ c a -> do b <- apply t c a
                                  apply (f b) c a
```

Observe the difference: composition takes the result of the first translation as the argument to the second translation, whereas bind uses the result to *determine* the second translation, but then applies that second translation to the original argument.

Another useful combinator is `<+>` (from the `ArrowPlus` class), which acts as a catch for `Translate`:

```
(<+>) :: Translate c m a b -> Translate c m a b -> Translate c m a b
t1 <+> t2 = translate $ \ c a -> apply t1 c a `mplus` apply t2 c a
```

We can now write strategic programming code, such as the classic `try` combinator:

```
tryR :: Rewrite c m a -> Rewrite c m a
tryR r = r <+> idR
```

where `idR` is the identity rewrite:

```
idR :: Rewrite c m a
idR = translate $ \ _ -> return
```

Finally, one combinator new to this version of KURE is sequential composition of rewrites that allows one rewrite to fail:

`(>+>) :: Rewrite c m a -> Rewrite c m a -> Rewrite c m a`

Now let's consider an example. Take a data type of arithmetic expressions augmented with a Fibonacci primitive:

`data Arith = Lit Int | Add Arith Arith | Sub Arith Arith | Fib Arith`

To keep things simple, we'll work with an empty context, and use `Maybe` as our `MonadPlus`:

`type RewriteA = Rewrite () Maybe Arith`

Let's start with some rewrites that perform basic arithmetic simplification:

```
addLitR :: RewriteA
addLitR = do Add (Lit m) (Lit n) <- idR
             return (Lit (m + n))

subLitR :: RewriteA
subLitR = do Sub (Lit m) (Lit n) <- idR
             return (Lit (m - n))
```

We're exploiting the fact that `Translate` is a monad in order to use do-notation, something we have found extremely convenient. If the pattern match fails, this will just trigger the `fail` method of the monad, which we can then catch as desired.

Using `>+>`, we can combine these two rewrites into a single rewrite for arithmetic simplification:

```
arithR :: RewriteA
arithR = addLitR >+> subLitR
```

Next a more interesting rewrite, unfolding the definition of Fibonacci:

```
fibLitR :: RewriteA
fibLitR = do Fib (Lit n) <- idR
             case n of
               0 -> return (Lit 0)
               1 -> return (Lit 1)
               _ -> return (Add (Fib (Sub (Lit n) (Lit 1)))
                                (Fib (Sub (Lit n) (Lit 2))))
```

Thus far, we've only discussed rewrites that apply to the entire data structure we're working with. But a key feature of KURE (and strategic programming) is the ability to traverse a structure, applying rewrites to specific locations. For example, the `anyR` combinator applies a rewrite to each *immediate child* of a node, succeeding if any of those rewrites succeed:

`anyR :: RewriteA -> RewriteA`

At first glance this might sound simple, but there are a number of issues. Most notably, what if the children have distinct types from each other and their parent? How should such a combinator be typed? This isn't an issue in this simple Fibonacci example, as there is only one type (`Arith`), but in general you could have an AST with multiple mutually recursive non-terminals. KURE solves this by constructing a sum data type of all non-terminals in the AST, and having traversal combinators operate over this data type (using associated types to specify the sum type for each non-terminal). This is the most significant feature of KURE, but it'd take too long to explain the details here. You can read about it in either of the following papers:

- A Haskell Hosted DSL for Writing Transformation Systems
- Introducing the HERMIT Equational Reasoning Framework

Using the `anyR` combinator (amongst others), KURE defines various traversal strategies (we just give the specialised types here):

```
anybuR :: RewriteA -> RewriteA
anybuR r = anyR (anybuR r) >+> r

innermostR :: RewriteA -> RewriteA
innermostR r = anybuR (r >>> tryR (innermostR r))
```

`anybuR` traverses a tree in a bottom-up manner, applying the rewrite to every node, whereas `innermostR` performs a fixed-point traversal, continuing until no more rewrites can be successfully applied. For example, we can define an evaluator for `Arith` using this strategy:

```
evalR :: RewriteA
evalR = innermostR (arithR >+> fibLitR)
```
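The real KURE combinators are polymorphic in the context, monad, and node types; purely as an illustration, here is a drastically simplified model, specialized to `Arith` with `Maybe`, that is nonetheless complete enough to run `evalR`. The hand-written `anyR` stands in for KURE's generic traversal machinery, and rewrites are plain functions rather than `Translate` values:

```haskell
import Data.Maybe (fromMaybe)

data Arith = Lit Int | Add Arith Arith | Sub Arith Arith | Fib Arith
  deriving (Eq, Show)

type RewriteA = Arith -> Maybe Arith  -- empty context, Maybe as the MonadPlus

(>>>) :: RewriteA -> RewriteA -> RewriteA   -- composition: both must succeed
(r1 >>> r2) a = r1 a >>= r2

(>+>) :: RewriteA -> RewriteA -> RewriteA   -- sequencing: at least one succeeds
(r1 >+> r2) a = case r1 a of
  Just b  -> Just (fromMaybe b (r2 b))
  Nothing -> r2 a

tryR :: RewriteA -> RewriteA
tryR r a = Just (fromMaybe a (r a))

-- apply r to every immediate child, succeeding if any child rewrite succeeds
anyR :: RewriteA -> RewriteA
anyR r e = case e of
  Lit _   -> Nothing
  Fib x   -> Fib <$> r x
  Add x y -> bin Add x y
  Sub x y -> bin Sub x y
  where
    bin con x y = case (r x, r y) of
      (Nothing, Nothing) -> Nothing
      (mx, my)           -> Just (con (fromMaybe x mx) (fromMaybe y my))

anybuR, innermostR :: RewriteA -> RewriteA
anybuR r     = anyR (anybuR r) >+> r
innermostR r = anybuR (r >>> tryR (innermostR r))

addLitR, subLitR, fibLitR, arithR, evalR :: RewriteA
addLitR (Add (Lit m) (Lit n)) = Just (Lit (m + n))
addLitR _                     = Nothing
subLitR (Sub (Lit m) (Lit n)) = Just (Lit (m - n))
subLitR _                     = Nothing
fibLitR (Fib (Lit 0))         = Just (Lit 0)
fibLitR (Fib (Lit 1))         = Just (Lit 1)
fibLitR (Fib (Lit n))         = Just (Add (Fib (Sub (Lit n) (Lit 1)))
                                          (Fib (Sub (Lit n) (Lit 2))))
fibLitR _                     = Nothing
arithR = addLitR >+> subLitR
evalR  = innermostR (arithR >+> fibLitR)
```

With this model, `evalR (Fib (Lit 5))` reduces all the way down to `Just (Lit 5)`.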

KURE 2.0.0 is now available on Hackage.

You can find this Fibonacci example, and several others, bundled with the source package. For a non-trivial example, KURE is being used as the underlying rewrite engine for the HERMIT tool. HERMIT hasn't been released yet, but you can read about it in this paper:

Introducing the HERMIT Equational Reasoning Framework

The paper also describes how KURE uses lenses.

Imagine you are sitting at your terminal wishing your Haskell program would go faster. The optimization flag has been turned on, and you know of an unimplemented transformation that could help. What do you do? You could add a new optimization pass to GHC, taking part in the dark art of tuning heuristics to allow it to play well with others. Or you could experiment, using HERMIT.

As an example we use the Fibonacci function, not because it is interesting, but because it is so well known.

```
module Main where

fib :: Int -> Int
fib n = if n < 2 then 1 else fib (n-1) + fib (n-2)
```

Compiling with **-O2**, and using Criterion to average over 100 tests, we observe that **fib 35** runs in 124.0ms±2.6ms on our development laptop.

To enable further optimization of **fib**, we want to try unrolling the recursive calls. Like the GHC optimizer, we want to do this without changing the source, which is clear and concise.

To do so, we fire up HERMIT, choosing to use the command-line interface (because the GUI version is not written yet). HERMIT uses the GHC plugin mechanism, to insert itself into the optimization pipeline as a rather non-traditional compiler pass, capturing programs mid-compilation and allowing the user to manipulate them.

```
ghc -fplugin=Language.HERMIT.Plugin \
    -fplugin-opt=Language.HERMIT.Plugin:main:Main:mode=i \
    -fforce-recomp -O2 -dcore-lint Main.hs
[1 of 1] Compiling Main             ( Main.hs, Main.o )
...
module main:Main where
  fib ∷ Int -> Int
...
hermit>
```

GHC has compiled our program into its intermediate form, called GHC Core, and HERMIT is asking for input. At this point we can start exploring our captured program. There are several bindings, but let us consider just **fib**.

```
hermit> consider 'fib
rec fib = λ n →
      case (<) ▲ $fOrdInt n (I# 2) of wild
        False → (+) ▲ $fNumInt (fib ((-) ▲ $fNumInt n (I# 1)))
                               (fib ((-) ▲ $fNumInt n (I# 2)))
        True → I# 1
```

The **consider** command moves the focus onto the specified function, and the REPL displays the function’s abstract syntax. This is HERMIT’s “compact” pretty printer for GHC Core. Core programs necessarily contain a substantial amount of information, and in this render engine HERMIT uses ▲ as a placeholder for type arguments. Notice that we refer to a Haskell syntactical name using the Template Haskell convention of prefixing with a tick.

Next we want to unroll **fib**, to see if this improves performance. Specifically, we want to inline all calls to **fib** (but nothing else) in a bottom-up manner. HERMIT uses strategic programming as its basis, so we build a strategy using two HERMIT functions.

```
hermit> any-bu (inline 'fib)
rec fib = λ n →
      case (<) ▲ $fOrdInt n (I# 2) of wild
        False → (+) ▲ $fNumInt
                  ((λ n → case (<) ▲ $fOrdInt n (I# 2) of wild
                            False → (+) ▲ $fNumInt (fib ((-) ▲ $fNumInt n (I# 1)))
                                                   (fib ((-) ▲ $fNumInt n (I# 2)))
                            True → I# 1)
                     ((-) ▲ $fNumInt n (I# 1)))
                  ((λ n → case (<) ▲ $fOrdInt n (I# 2) of wild
                            False → (+) ▲ $fNumInt (fib ((-) ▲ $fNumInt n (I# 1)))
                                                   (fib ((-) ▲ $fNumInt n (I# 2)))
                            True → I# 1)
                     ((-) ▲ $fNumInt n (I# 2)))
        True → I# 1
```

Each inner instance of **fib** has been replaced with its definition, effectively unrolling **fib**. There are many more simplifications that can be done, such as β-reduction, but GHC is good at this. For now, we just want to test this version; so we type **exit** and GHC resumes compiling our program.

Now we time this new version. Again, taking the average over 100 tests using Criterion, we observe that the post-HERMIT **fib 35** runs in about 76.7ms±3.6ms. We have modified GHC’s internal representation of our program mid-compilation, improving it such that the resulting program runs measurably faster. Furthermore, we did not need to alter the original program to do so.

If you are interested in learning more about HERMIT, we’ve just submitted a paper to the Haskell Symposium, called Introducing the HERMIT Equational Reasoning Framework, written by Andrew Farmer, Ed Komp, Neil Sculthorpe, and myself. A first release of HERMIT will be available via hackage in the near future.

Andy Gill

Reification is the observation of underlying structure. One reason to reify is to cross-compile a Haskell expression to execute on a different platform. This has been done many times; a great reference is Conal Elliott, Sigbjorn Finne and Oege de Moor’s “Compiling Embedded Languages”.

The trick is to build a structure when a direct interpretation would be typically used. A canonical example is arithmetic.

```
data Expr where
  Lit :: Int -> Expr
  Add :: Expr -> Expr -> Expr

instance Num Expr where
  e1 + e2       = Add e1 e2
  fromInteger i = Lit (fromInteger i)
```

Now we can do things like

```
GHCi> 1 + 2
Add (Lit 1) (Lit 2)
```

and get the *structure* of the computation of the expression, not just the value. This is called a deeply embedded language; the structure is deeply embedded inside the expression.

From here we can use this structure to, among other things, compile and execute this expression in a different setting. In “Compiling Embedded Languages”, the target was executing C graphics code.

It is possible to reify a function. The trick here is to provide a prototypical version of the input argument, and observe how it is used in the result.

```
data Expr where
  -- new constructor
  Var :: String -> Expr
  -- as before
  Lit :: Int -> Expr
  Add :: Expr -> Expr -> Expr

reifyFn :: (Expr -> Expr) -> Expr
reifyFn fn = fn (Var "x")
```

Now we can reify the function.

```
> let f x = x + 2 :: Expr
> reifyFn f
Add (Var "x") (Lit 2)
```
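The two fragments above combine into a complete, compilable example. We derive `Eq` and `Show` so results can be inspected, and define only the `Num` methods the example needs (the rest are omitted for brevity):

```haskell
{-# LANGUAGE GADTs #-}

data Expr where
  Var :: String -> Expr
  Lit :: Int -> Expr
  Add :: Expr -> Expr -> Expr
  deriving (Eq, Show)

instance Num Expr where
  e1 + e2       = Add e1 e2
  fromInteger i = Lit (fromInteger i)
  -- remaining Num methods omitted; not needed for this example

-- reify a function by feeding it a prototypical variable
reifyFn :: (Expr -> Expr) -> Expr
reifyFn fn = fn (Var "x")
```

Here `reifyFn (\x -> x + 2)` produces the syntax tree `Add (Var "x") (Lit 2)`, recovering the structure of the function body.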

We use this trick extensively in Kansas Lava, and other DSLs.

Providing a deep embedding of a Monad is straightforward. Consider:

```
data M :: * -> * where
Return :: a -> M a
Bind :: M a -> (a -> M b) -> M b
GetChar :: M Char
PutChar :: Char -> M ()
```

The issue is: how do we reify `Bind`? Or, more specifically, how do we provide the prototypical variable of type `a` needed to reify the second argument of `Bind`? For a long time, I assumed this was not possible without a post-hoc constraint on `Bind` for a type that provided the appropriate `Var ".."`.

However, there is a way of doing this, by normalizing the structure of the monad. This trick was introduced by Chuan-kai Lin in Unimo and is used by Heinrich Apfelmus in his `operational` package on Hackage. We work with `operational`, because (1) Heinrich Apfelmus contacted us, pointing out that his library could simplify our ad-hoc unrolling mechanism, and (2) it was available on Hackage.

`operational` uses the left-identity and associativity monad laws to normalize a monadic program into a stream of primitive instructions terminated by a `return`.

```
Program ::= Primitive >>= Program
          | return a
```

Using `operational` is easy: you define the primitive(s), and then you can `view` your program.

```
import Control.Monad.Operational

data MI :: * -> * where
  GetChar :: MI Char
  PutChar :: Char -> MI ()

-- bind and return provided by operational
type M a = Program MI a

compile :: M a -> ...
compile = eval . view
  where
    eval :: ProgramView MI a -> ...
    eval (PutChar ch :>>= g) = ...
    eval (GetChar :>>= g)    = ...
    eval (Return b)          = ...
```

This effectively gives you a deep embedding. Neat stuff.
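To make the trick concrete without depending on the package, here is a tiny model of `operational`'s normal form (the names and the pure interpreter `run` are ours, not the package's API): bind re-associates at construction time, so every program ends up as a chain of instructions ending in `Return`, which an interpreter can then pattern-match on:

```haskell
{-# LANGUAGE GADTs #-}

-- a minimal stand-in for operational's Program/view
data Program t a where
  Return :: a -> Program t a
  (:>>=) :: t x -> (x -> Program t a) -> Program t a

instance Functor (Program t) where
  fmap f (Return a)  = Return (f a)
  fmap f (tx :>>= k) = tx :>>= (fmap f . k)

instance Applicative (Program t) where
  pure = Return
  Return f    <*> p = fmap f p
  (tx :>>= k) <*> p = tx :>>= (\x -> k x <*> p)

instance Monad (Program t) where
  Return a    >>= k = k a                        -- left identity
  (tx :>>= h) >>= k = tx :>>= (\x -> h x >>= k)  -- associativity

data MI a where
  GetChar :: MI Char
  PutChar :: Char -> MI ()

getC :: Program MI Char
getC = GetChar :>>= Return

putC :: Char -> Program MI ()
putC c = PutChar c :>>= Return

-- a pure interpreter: consume an input string, collect the output string
run :: Program MI a -> String -> (a, String)
run (Return a)         _        = (a, "")
run (GetChar :>>= k)   (c : cs) = run (k c) cs
run (GetChar :>>= k)   []       = run (k '\0') []  -- arbitrary end-of-input choice
run (PutChar c :>>= k) inp      = let (a, out) = run (k ()) inp
                                  in (a, c : out)

echoTwice :: Program MI ()
echoTwice = do c <- getC
               putC c
               putC c
```

Running `echoTwice` on the input `"ab"` reads one character and emits it twice, giving the output `"aa"`.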

Javascript implementations on all major browsers provide a powerful API for building interactive web pages. We want to use Javascript libraries, but program in Haskell, by using a Javascript monad.

A usable model can be built using a simple translation of a fixed set of function calls into Javascript commands. With careful construction, we can combine commands before sending them, optimizing network usage. The challenging part is having the Javascript return values in an efficient manner. Consider this Haskell code:

```
c <- getContext "my-canvas"
... some use of c ...
```

In a simple transaction model, `getContext` invokes a Javascript command on the client, returning the response as `c`. However, we would prefer the whole code fragment to be compiled to Javascript such that the binding and use of `c` are performed on the client directly, with no intermediate client<->server communication. And thanks to the ideas inside Unimo and `operational`, we can!

We do this by constraining the returned values of all the primitives to be reifiable, via a constraint on GADT constructors. In (the simplified version of) our Javascript compiler, Javascript function calls are implemented with the `JS_Call` primitive.

```
data JSInst :: * -> * where
  JS_Call :: (Sunroof a) => String -> [JSValue] -> JSInst a
  ...
```

This is the key step: the `Sunroof` constraint provides the ability to generate a prototypical `a`. The Unimo trick works for constrained types as well as monomorphic types.

So, from our list of primitives, the `operational` package allows us to build our Javascript monad; the monad instance for `JSM` is provided by `Program`.

```
type JSM a = Program JSInst a
```

For technical reasons, `Program` is abstract in `operational`, so the library provides `view` to give a normalized form of the monadic code. In the case of `JS_Call`, bind corresponds to normal sequencing, where the result of the function call is assigned to a variable whose name has already been passed to the rest of the computation for compilation. `newVar`, `assignVar` and `showVar` are provided by the `Sunroof` class.

```
compile :: Sunroof c => JSM c -> CompM String
compile = eval . view
  where
    showArgs :: [JSValue] -> String
    showArgs = intercalate "," . map show

    eval :: Sunroof b => ProgramView JSInst b -> CompM String
    eval (JS_Call nm args :>>= g) = do
      a <- newVar
      code <- compile (g a)
      return $ assignVar a ++ nm ++ "("
            ++ showArgs args ++ ");" ++ code
    ...
    eval (Return b) = return $ showVar b
```
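The essence of `compile` can be demonstrated without any of the Sunroof machinery. In this toy version (all names are invented for illustration), each primitive call carries its continuation, results are simply Javascript variable names, and compiling a bind emits a fresh `var` binding whose name is fed to the rest of the program:

```haskell
{-# LANGUAGE GADTs #-}
import Data.List (intercalate)

-- toy model: a program is a chain of JS calls; results are JS variable names
data JS a where
  Ret  :: a -> JS a
  Call :: String -> [String] -> (String -> JS a) -> JS a

instance Functor JS where
  fmap f (Ret a)       = Ret (f a)
  fmap f (Call g as k) = Call g as (fmap f . k)

instance Applicative JS where
  pure = Ret
  Ret f       <*> m = fmap f m
  Call g as k <*> m = Call g as (\v -> k v <*> m)

instance Monad JS where
  Ret a       >>= k = k a
  Call g as h >>= k = Call g as (\v -> h v >>= k)

call :: String -> [String] -> JS String
call f args = Call f args Ret

-- each Haskell bind becomes a Javascript variable binding
compile :: JS String -> Int -> String
compile (Ret v)         _ = "return " ++ v ++ ";"
compile (Call f args k) n =
  let v = "v" ++ show n
  in "var " ++ v ++ "=" ++ f ++ "(" ++ intercalate "," args ++ ");"
     ++ compile (k v) (n + 1)

demo :: String
demo = compile prog 0
  where prog = do c <- call "getContext" ["\"my-canvas\""]
                  call "fill" [c]
```

`demo` compiles to `var v0=getContext("my-canvas");var v1=fill(v0);return v1;`, showing the binding of `c` happening entirely in the generated Javascript.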

This critically depends on the type-checking extensions used for compiling GADTs, and scales to additional primitives, provided they are constrained on their polymorphic result, like `JS_Call`.

Using `compile`, we compile our Sunroof Javascript DSL to Javascript, and now a bind in Haskell results in a value binding in Javascript. A `send` command compiles the Javascript expressed in monadic form and sends it to the browser for execution.

```
send :: (Sunroof a) => JSM a -> IO a
```

The Javascript code then responds with the return value, which can be used as an argument to future calls to `send`.

We can write a trivial example which draws a circle that follows the mouse:

```
drawing_app :: Document -> IO ()
drawing_app doc = do
  ...
  send doc $ loop $ do
    event <- waitFor "mousemove"
    let (x,y) = (event ! "x", event ! "y")
    c <- getContext "my-canvas"
    c <$> beginPath()
    c <$> arc(x, y, 20, 0, 2 * pi, false)
    c <$> fillStyle := "#8ED6FF"
    c <$> fill()
```

The following code is generated by Sunroof (on the Haskell server) and then executed entirely on the client:

```
var loop0 = function(){
  waitFor("mousemove", function(v1){
    var v2 = getContext("my-canvas");
    (v2).beginPath();
    (v2).arc(v1["x"], v1["y"], 20, 0, 2*Math.PI, false);
    (v2).fillStyle = "#8ED6FF";
    (v2).fill();
    loop0();
  })
}; loop0();
```

Voila! A Haskell-based Javascript monad, reified and transmitted to a browser.

This blog article is adapted from the short paper Haskell DSLs for Interactive Web Services, submitted to XLDI 2012, written by Andrew Farmer and myself.

The lesson, I suppose, is never to assume something is not possible in Haskell. We only stumbled onto this when we were experimenting with a variant of monadic bind with class constraints, and managed to remove the constraints. We’ve never seen an example of using `operational` or Unimo that constrains the primitives so that a prototypical value can be generated, aka the function reification trick above. If anyone has seen this, please point it out, and we’ll be happy to cite it.

We would like to thank Heinrich Apfelmus for pointing out that we could rework our compiler to use `operational`, and for providing us with a suitable template of its usage.

Let the reification of monads begin!

Andy Gill

Kansas Lava is a Haskell library for simulating hardware and generating VHDL, in the spirit of Chalmers Lava and Xilinx Lava. We make extensive use of associated type functions and other GHC extensions.

Kansas Cores is a set of libraries that use Kansas Lava to build various hardware components, such as UARTs and an LCD driver, as well as a monad and simulator for the Spartan3e.

Both are now available on hackage.

Kansas Lava has various online resources. The Kansas Lava webpage is

A video walkthrough of our tutorial has been started, and is available on youtube.

The website for the functional programming group at KU is

Kansas Lava Team @ KU

- Improving the Presentation of Technical Material in Video Talks using Post Production. In this paper, we present our experiences using our image processing toolkit ChalkBoard and other video processing tools to post-process a pre-recorded conference talk. With inexpensive video cameras, video services like youtube.com and vimeo.com, and widely available and inexpensive video editing software, we expect this new media to be increasingly used as a mechanism to both promote research agendas and deliver technical content. In order to explore the use of such media in more detail, the Functional Programming group at KU recorded a technical talk and experimented with various post-processing tricks to enhance the value of the talk. Specifically, we fixed a common lensing issue in software, added small animations and pictures which matched the gestures of the actors, improved the visual quality of the slides being talked to, and experimented with a post-hoc zoom. Overall, the post-processing stage took considerably longer than anticipated, but did add perceivable value and impact to the final video.

We also had to work hard at making a LaTeX document look like a Word document.

Comments always welcome!

FPG@KU

- Capturing Functions and Catching Satellites, Andy Gill and Garrin Kimmell. In this paper, we describe the modeling of the simulation environment used when running the ICFP 2009 programming contest, with a focus on the compilation and testing infrastructure for the generated binaries for our virtual machine. This infrastructure makes novel use of an implementation of a deeply embedded Domain Specific Language (DSL) within Haskell.
- Types and Type Families for Hardware Simulation and Synthesis: The Internals and Externals of Kansas Lava, Andy Gill, Tristan Bull, Andrew Farmer, Garrin Kimmell, Ed Komp. In this paper, we overview the design and implementation of our latest version of Kansas Lava. Driven by needs and experiences of implementing telemetry circuits, we have made a number of recent improvements to both the external API and the internal representations used. We have retained our dual shallow/deep representation of signals in general, but now have a number of externally visible abstractions for combinatorial, sequential, and enabled signals. We introduce these abstractions, as well as our new abstractions for memory and memory updates. Internally, we found the need to represent unknown values inside our circuits, so we made aggressive use of type families to lift our values in a principled and regular way. We discuss this design decision, how it unfortunately complicates the internals of Kansas Lava, and how we mitigate this complexity.
- Every Animation Should Have a Beginning, a Middle, and an End, Kevin Matlage, Andy Gill. Animations are sequences of still images chained together to tell a story. Every story should have a beginning, a middle, and an end. We argue that this advice leads to a simple and useful idiom for creating an animation Domain Specific Language (DSL). We introduce our animation DSL, and show how it captures the concept of beginning, middle, and end inside a Haskell applicative functor we call Active. We have an implementation of our DSL inside the image generation accelerator, ChalkBoard, and we use our DSL on an extended example, animating a visual demonstration of the Pythagorean Theorem.
- What’s the Matter with Kansas Lava?, Andrew Farmer, Garrin Kimmell, Andy Gill. In the course of attempting to generate ever larger circuits, we have found the need to effectively test and debug the internals of Kansas Lava. This includes confirming both the simulated behavior of the circuit and its hardware realization via generated VHDL. In this paper we present the notion of probes, which can observe both intermediate values and functions, without changing the circuit interface. We then describe several debugging tools built using probes.

We have also been working to improve our web presence, and are pleased to announce our new web pages:

Comments always welcome!

FPG@KU

Kevin Matlage

Hood is based on the concept of observation of intermediate data structures, rather than the more traditional stepping and variable examination paradigm used by imperative language debuggers. It can observe base types (Int, Bool, Float, etc.), finite and infinite structures (lists, trees, arrays, etc.), functions, and monadic actions. Notably, Hood preserves the type and strictness properties of the functions under observation, meaning your program can be debugged without affecting its semantics. Best of all, Hood can be extended to observe custom data types with simple instance declarations.
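For readers who have not used Hood before, here is a minimal sketch of an observation. It assumes the `hood` package from Hackage (module `Debug.Hood.Observe`); the label `"after map"` is just an illustrative name for the observation point.

```
import Debug.Hood.Observe

-- Tag the intermediate list with an observation. Because observe is
-- semantics-preserving, ex still evaluates to exactly map (*2) [1,2,3].
ex :: [Int]
ex = observe "after map" (map (*2) [1,2,3])

-- runO runs the action and then prints the collected observations.
main :: IO ()
main = runO (print ex)
```

Running this prints `[2,4,6]` as usual, followed by Hood's report of the value seen at the `"after map"` observation point; laziness and strictness of the observed structure are left untouched.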

You can find more information, including examples and a tutorial, on the new Hood webpage:

http://www.ittc.ku.edu/csdl/fpg/Hood

We hope you find it useful!

Andrew Farmer, Andy Gill