Image courtesy of https://www.flickr.com/photos/waltstoneburner/5745387762/
It's about time we wrapped up this short series on sculpting code in the REPL. In part one we discussed the idea of a tool that would provide us with feedback while we worked in a REPL: the REPL was our potter's wheel turning the lump of clay that is the code we want to build, and the tool we were looking for was a digital stand-in for our hands, shaping the code into something well formed and usable. In part two we looked at a simple implementation of an API that lets us experiment in the REPL without too much ceremony and that supports the somewhat different workflow that working in a REPL can offer. Now let's take a look at possible enhancements that would build on that work and improve the experience even more.
I was going for the simplest thing possible, and I wound up printing textual results to the console of the REPL. That works for me right now, but it would be really nice if a bit more UI thought were put into it. I would like to skip the default answer of "let's hook this up to a test runner", since the semantics of an experiment are unique enough that it would impede the experience. A more graphical interface could prove useful, as I could track fun things like the number of times an experiment fails before it succeeds or the time from the start of the experiment to reaching a conclusion. The idea is that the work in the REPL starts to feed into other, more analytical components so you can get feedback about your process and potentially use that to work more effectively. Who knows what kind of useful patterns you might find by capturing data about your work while you work!
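As a rough sketch of what that analytical layer might capture, here is one way to model the feedback data in F#. Everything here — the `ExperimentStats` type and its helper functions — is an illustrative assumption, not part of the experiment API from part two:

```fsharp
open System

// Hypothetical metrics for a single experiment session.
type ExperimentStats =
    { Name      : string
      Failures  : int              // failed runs before the first success
      StartedAt : DateTime
      EndedAt   : DateTime option } // None until the experiment concludes

// Record one more failed run.
let recordFailure stats = { stats with Failures = stats.Failures + 1 }

// Mark the experiment as concluded.
let conclude stats = { stats with EndedAt = Some DateTime.UtcNow }

// Time from the start of the experiment to reaching a conclusion.
let duration stats =
    stats.EndedAt |> Option.map (fun finished -> finished - stats.StartedAt)
```

A graphical front end could then chart these records over a day's worth of REPL work.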
Formalized peer review? I thought we didn't want that. Well, maybe it's not such a bad idea. What started out as a tool for me to play with ideas in the REPL could morph into a tool to share that fun with others. They could tweak and test in their own REPLs and refine my experiment into something more production worthy, or come up with a completely different solution that satisfies my hypothesis in a way I never would have thought of. Having a solid protocol for transparent experimentation could be an interesting way to reduce wasted cycles in your team as everyone gains confidence in a solution to a technical problem. Something like JSFiddle or Try F#, but one that encourages comparing different approaches, would be interesting to build.
There are definitely other improvements that could be made to the experiment API that I haven't thought of, but let's switch gears and see if we can come up with some enhancements to the REPL itself.
Let's say I've been using the REPL to define functions all day. Maybe the function I forgot the code for is buried all the way in this morning's work. It would be nice if I could retrieve the source code that defines the function from the depths of the REPL session.
For example:
Sometime in the morning I defined the following function:
let someFun x = x * x
I have been using it all day in another function but now I want to get the source back for the function so I can add it to the application I am building. It would be great if I could do something like this:
> getSourceDef someFun;;
- getSourceDef f -> string : returns the source code that was used to create the function or object
Function "someFun" is defined as:
let someFun x = x * x
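One way a REPL could support something like this is to record the source text of every submitted definition, keyed by the name it binds, so that retrieval is just a lookup. This is purely a sketch of my own; the names are hypothetical, and a real implementation would accept the function value itself (perhaps via quotations) rather than a name string:

```fsharp
open System.Collections.Generic

// Hypothetical: the REPL's index of everything defined this session.
let sourceIndex = Dictionary<string, string>()

// The REPL would call this after each successful submission;
// later definitions of the same name shadow earlier ones.
let recordDefinition (name: string) (source: string) =
    sourceIndex.[name] <- source

// Look up the most recent source for a name, if any.
let getSourceDef (name: string) =
    match sourceIndex.TryGetValue name with
    | true, source -> Some source
    | false, _     -> None
```

With `recordDefinition "someFun" "let someFun x = x * x"` having run that morning, `getSourceDef "someFun"` would return the saved definition.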
Another idea that comes to mind is finding usages of a function or class in the REPL session. Maybe there is some interesting code that uses the someFun function, but I forget what it all was.
For example:
Given the same someFun function, we want to find all the source code in the REPL session that refers to it.
That might look like:
> findUsagesOf someFun;;
- findUsagesOf f -> list<t> : returns a list of the other functions or objects that call the given function or object in the current REPL session
[myAvg; someBizCalcs;]
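Building on the same idea of the REPL remembering each definition's source text, here is a deliberately naive sketch of findUsagesOf. It just searches for the target name inside every other recorded definition, so it would report false positives from comments and strings — a real version would walk the parsed AST instead. All names are hypothetical:

```fsharp
// Naive usage search over (name, source) pairs recorded by the REPL.
let findUsagesOf (target: string) (definitions: (string * string) list) =
    definitions
    |> List.filter (fun (name, source) ->
        name <> target && source.Contains target)
    |> List.map fst

// findUsagesOf "someFun"
//     [ "someFun", "let someFun x = x * x"
//       "myAvg",   "let myAvg xs = xs |> List.map someFun |> List.average" ]
// returns ["myAvg"]
```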
Chris Granger had some great ideas in the IDE he built, essentially a bulked-up REPL with immediate feedback. I think a happy middle ground between his fully automated version and the current state of most REPLs would include some hooks for responding to events coming out of the REPL.
The main use case I have in mind is re-running the experiment when a function definition is updated. Maybe I've been drinking too much from the stream of Functional Reactive Programming (FRP), but being able to handle events and trigger custom behavior based on them would go a long way toward a more extensible and responsive REPL.
Being able to easily do something like this would be a huge win:
> onchange someFun (fun f ->
      match runExperiment () with
      | Passed -> getSourceDef f |> saveCode
      | Failed -> playSadTromboneSound ());;
Now when the someFun function is changed, my experiment is automatically re-run; if it passes, we can grab the code for the function we are testing and store it somewhere, and if it fails we can play a sad trombone sound to let me know that I need to keep working on the experiment. Having the REPL talk back and tell you what it did would go a long way toward creating a more interactive environment and a more reactive experience.
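Under the hood, a hook like onchange could be modelled with a standard F# event that the REPL fires whenever a binding is redefined. This is a sketch under that assumption — bindingChanged and onchange are names I've made up, not an existing REPL API:

```fsharp
// Hypothetical: an event the REPL raises with the name of any
// binding that gets redefined during the session.
let bindingChanged = Event<string>()

// onchange: run a handler whenever the named binding is redefined.
let onchange (name: string) (handler: string -> unit) =
    bindingChanged.Publish
    |> Event.filter ((=) name)
    |> Event.add handler

// The REPL itself would trigger the event after re-evaluating a
// definition, e.g.:
// bindingChanged.Trigger "someFun"
```

Layering the experiment-running handler from the example above on top of this event is then just a matter of subscribing with onchange.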
There are many possibilities beyond the few I talked about here. If you have any ideas feel free to add them in the comments. The future is out there waiting for you to build it. Happy dreaming!