Mainly a bugfixing release – the big news is that from this release onwards we are shipping statically linked builds, which should help with install issues and compatibility on a wider range of (Linux) systems.
As always, I’d like to extend a big thank you to all of our contributors for all the improvements!
See the complete changelog and credits here:
Edit: Hilariously, bytevectors didn’t win me as much extra performance as I expected.
With that in mind, I had an interesting idea today.
A module akin to tidy-codegen that could be executed on one’s codebase as a post-compilation step to obtain Unison-like features in Purescript without any additional cost.
Hashes derived from the abstract syntax tree, in PureScript. I looked further into the idea and it’s not impossible; with my recent Chez Scheme work on the sha3 module, I’ve already trodden some of the path toward it.
Research (please feel free to correct me if I’m mistaken on anything):
PureScript’s compiler already produces CoreFn as a serialized JSON intermediate representation (the same IR that purescm consumes). CoreFn is a desugared, simplified AST that strips away syntactic sugar and do-notation, and makes type class dictionaries explicit. It’s already quite close to the “structural essence” of each function.
Each module’s CoreFn output lives in output//corefn.json and contains every top-level binding with its full expression tree. This is my ideal input!
Steps to bring Unison functionality into Purescript:
Easy wins:
Structural identity: renaming a function or its local variables doesn’t change the hash. Only actual logic changes do.
Dependency-aware hashing: if a helper function changes, all callers’ hashes change too (because their normalized AST embeds the helper’s hash, not its name).
Incremental builds: we could potentially find places where it would be advantageous to skip recompilation/analysis of anything whose hash hasn’t changed, similar to how Unison’s scratch files work.
AI token compression: Share the code once, then reference by hash. One could maintain a local hash→code database and even have a convention where they’d send an LLM a manifest at the start of a conversation.
MUCH harder to achieve:
Unison’s type-aware hashing considers types as part of identity. CoreFn has type annotations but they’re not always fully elaborated. We could include them or exclude them depending on whether we want Int → Int and Number → Number functions with identical bodies to hash differently.
Unison also handles structural type equivalence: record field reordering, etc. CoreFn is more rigid here, so we’d need to sort record fields by name during normalization if we want that.
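The normalization and hashing ideas above can be sketched quickly. The following is a toy model in Python, not the actual corefn.json schema: bound variable names become de Bruijn-style indices (so renames don’t change the hash), record-style children would be sorted by name, and free references are replaced by their dependency’s hash before hashing with SHA3-256.

```python
import hashlib
import json

def normalize(node, env=()):
    """Toy AST normalizer: bound variable names become de Bruijn-style
    indices so alpha-renaming does not affect the hash."""
    kind = node["kind"]
    if kind == "Var":
        name = node["name"]
        return ["var", env.index(name)] if name in env else ["free", name]
    if kind == "Abs":
        return ["abs", normalize(node["body"], (node["arg"],) + env)]
    if kind == "App":
        return ["app", normalize(node["fn"], env), normalize(node["arg"], env)]
    if kind == "Lit":
        return ["lit", node["value"]]
    raise ValueError(f"unknown node kind: {kind}")

def hash_binding(ast, dep_hashes):
    """Hash a binding; free references are replaced by the hash of the
    binding they point at, so a changed helper changes all callers."""
    def subst(n):
        if isinstance(n, list):
            if len(n) == 2 and n[0] == "free" and n[1] in dep_hashes:
                return ["ref", dep_hashes[n[1]]]
            return [subst(c) for c in n]
        return n
    canonical = json.dumps(subst(normalize(ast)), separators=(",", ":"))
    return hashlib.sha3_256(canonical.encode()).hexdigest()

# Two alpha-equivalent bindings (\x -> add1 x vs \y -> add1 y) hash the same:
inc_x = {"kind": "Abs", "arg": "x", "body": {"kind": "App",
         "fn": {"kind": "Var", "name": "add1"},
         "arg": {"kind": "Var", "name": "x"}}}
inc_y = {"kind": "Abs", "arg": "y", "body": {"kind": "App",
         "fn": {"kind": "Var", "name": "add1"},
         "arg": {"kind": "Var", "name": "y"}}}
deps = {"add1": "deadbeef"}
assert hash_binding(inc_x, deps) == hash_binding(inc_y, deps)
# ...but changing the helper's hash changes every caller's hash:
assert hash_binding(inc_x, deps) != hash_binding(inc_x, {"add1": "cafebabe"})
```

The same `hash_binding` output could also key the hash→code database mentioned above, Unison-style.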
Edit: and here’s the pull request
I’d be honored if someone would give this a look and (if necessary) brutally dunk on me if this seems like a frivolous or unneeded change.
Edit: Glad to get a reply so quickly. I’ll instead be building this as a module/add-on to help make this the canonical way that we wire into bytevectors via Purescript FFI.
I was able to squeeze out a LOT more performance than is even possible using the Node.js backend.
A comparison against JS implementations on the same machine:
| Implementation | SHA3-256 1 MiB |
|---|---|
| Chez Scheme FFI (my chez-scheme branch) | 50.4 MB/s |
| js-sha3 (fully unrolled JS) | ~48 MB/s |
| Node.js FFI (my main branch) | 28.1 MB/s |
| noble/hashes (loop-based JS) | ~18 MB/s |
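For anyone wanting to reproduce this kind of comparison, here is a rough methodology sketch in Python using hashlib’s built-in SHA3-256 (so absolute numbers will differ from the FFI builds in the table; the 1 MiB payload matches it):

```python
import hashlib
import time

def throughput_mb_s(data: bytes, rounds: int = 20) -> float:
    """Hash `data` `rounds` times and report average throughput in MB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.sha3_256(data).digest()
    elapsed = time.perf_counter() - start
    return len(data) * rounds / elapsed / 1e6

payload = b"\x00" * (1 << 20)  # 1 MiB, as in the table above
print(f"SHA3-256: {throughput_mb_s(payload):.1f} MB/s")
```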
Benchmarking led me to find a TON of performance in both back ends.
I eventually molded the Chez Scheme version (with some major Nix surgery to provision the latest purescm) to utterly trounce the JavaScript version.
The Chez Scheme part was a fun experiment.
Mainly, I wanted to see how (more knowledgeable and experienced) people would go about it, since there are so many ways that could happen.
Honestly, the only reason I’m not going full-throttle toward learning Unison (or trying to use it like I’m implying) is their Discord-centered community, where questions get buried never to be seen again after about an hour.
I do think it is fertile ground, though. Our Chez Scheme back end is another example of great cross-pollination. There are so many elegant ideas in Unison that I’d like to start using either it or Chez Scheme for back ends.
As they say, “never roll your own crypto”. So, I really want the community to test this module and mercilessly critique/refactor it if needed. All of my test cases pass, but I don’t know what I don’t know. Maybe we can build hundreds more test cases.
Maybe there’s some edge case in there that makes this lib DANGEROUS in some way. That’s what I’m reaching out to hopefully prevent.
Here’s a link. Don’t worry about hurting my feelings. Be ruthless.
Disclaimer: yes, I know that this sha3 module probably isn’t quite ready for production use and has yet to be fully verified by the cryptographic community.
Spago 1.0.0 has been released and the registry is now in general availability as of Feb 1, 2026
A few things to note about how all of this affects different package managers:
Spago@next
If you are a spago@next user then your builds will continue to work. However, if you want to use the spago publish command you will need to update to the latest Spago release to integrate with the new infrastructure.
We will no longer be publishing the next tag and publishing has moved to use the latest tag.
Legacy package managers (Bower, legacy Spago)
If you use legacy Spago (you have a spago.dhall file) or you use Bower, then you can continue to publish your packages to the registry for the time being. When the registry produces a new package set we still produce a packages.dhall for it and write that to the registry repository. This “legacy” support will continue for a minimum of 6 months, but after that the registry maintainers may remove support for legacy package managers and package sets.
Nix
The purescript-overlay has been updated to support spago 1.x.x as the default entry for the spago package, so you no longer need to refer to spago-unstable if you are using the Spago rewrite.
If you use Nix with registry hashes to build your packages (for instance, you use mkSpagoDerivation), then your builds will fail until you regenerate your spago.lock files. You can regenerate them with a call to spago build which will detect your out-of-date files and regenerate them. The mkSpagoDerivation function has been updated to work with the current registry.
The registry itself is built with Nix, so you can see this pull request as an example of what it looks like to fix your Nix build.
Not all package versions that existed in Bower have been included in the registry. The registry only includes packages which satisfy the registry requirements, especially that they compile with at least one compiler from 0.13.0 onwards.
The old new-packages.json and bower-packages.json files which used to enumerate the packages in the registry have been moved into the registry in the archive directory. We have also added a new file which lists every version of these packages which did not get registered along with the specific error it encountered.
If you are surprised to find a package or package version is not in the registry, and the error listed in removed-packages.json is unclear, please open an issue so we can help diagnose and fix any mistakenly omitted package versions.
Thanks for holding on through the disruption this weekend! If you have any issues using the registry you may create a relevant issue in one of these locations:
Thanks!
@thomashoneyman and @f-f
As with many cross-language integrations I think the main problem will be adapting the type systems to each other, depending on what you had in mind.
Whenever I’ve had the time and inclination to work on it I’ve generally spent time trying to figure out something I consider to be a pretty-terrible-but-work-aroundable bug (1, 2, 3), so until that gets resolved I’m not really thinking about other improvements.
If an issue/PR was opened about some definite problem or thing that needs addressing I’d be responsive, but part of the reason it’s not getting much attention is there isn’t much wrong with it either, or not that can be solved without larger design changes perhaps.
The next changes in the repo will probably just be to upgrade spago, actually.
Is Halogen really now unmaintained?
That’s a shame if that’s true.
It’s the standard in this ecosystem for a reason.
Anyway, I’ve always had my eye on Unison for the back end…and as it hit 1.0 recently, I looked into the project. Picture that meme of the man walking with his girlfriend and looking back at another woman. That woman is Unison perhaps.
Anyway, I started to look into porting my back end to Unison, and it doesn’t actually seem that bad. However, they (unlike us) don’t seem to have much in the way of strong front-end libraries available. Is that an opportunity for cross-pollination? Or are we too different for that to be possible? I’m pretty sure that’s the case, but both of our syntaxes are ML-esque…
What do the Purescript community and devs think of this project? And what do you think are the ramifications of it? Do you think there’s any opportunity for wiring our front end tools to the Unison ecosystem?
I’ll keep building with the tools I have (and I sincerely love Purescript) but I’ll always have an eye toward new ideas…and Unison is surely one. Or perhaps an old idea that we haven’t seen done quite like that…
Edit: I stumbled across a fairly pertinent video today. It seems that I wasn’t completely out of left field with this idea since it is mentioned in the video that one would still need JavaScript to use Unison for browser UI’s.
@thomashoneyman and I have been working day and night over the last few weeks to finally wrap up the Registry project and get the Registry and Spago out of alpha state. We are aiming to do all of that over the next few weeks.
The PureScript Registry will leave alpha and move to GA.
There is one last breaking change that we need to ship, and since many of you have already been using the new Registry for a while (through spago@next), we are trying to cause the least amount of disruption possible.
A summary of what’s happening under the hood: the new patch will enforce that all package versions solve and compile with at least one version of purs from 0.13 onwards. This change revealed some issues in how we computed dependency bounds for legacy packages. Specifically, fragmentation in the ecosystem led many packages to have a bower.json file, spago.dhall/packages.dhall file, and package-sets repository entry, often with no clear “canonical” manifest to choose among the three; the bounds we discovered for packages like these were overly-restrictive and caused several working packages to fail in solving or compilation.
We’ve reworked how we detect dependency lists and compute bounds when they are ambiguous, and things are working much better. We also now know what compiler versions each package version is compatible with – information we can push into Pursuit’s search functionality. However, this change modifies the contents of the package archives, and therefore we must reupload all package versions.
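To make the bounds problem concrete, here is an illustrative Python sketch (my own analogy, not the registry’s actual algorithm): when several candidate manifests disagree, intersecting their bounds produces the overly-restrictive behavior described above, while a union keeps every version that any manifest accepted.

```python
# Illustrative only -- not the registry's real bounds algorithm.
# Bounds are (lower, upper) version tuples per dependency name.
def intersect_bounds(manifests):
    """Overly restrictive: keep only versions every manifest accepts."""
    deps = {}
    for manifest in manifests:
        for name, (lo, hi) in manifest.items():
            cur_lo, cur_hi = deps.get(name, (lo, hi))
            deps[name] = (max(cur_lo, lo), min(cur_hi, hi))
    return deps

def union_bounds(manifests):
    """More permissive: keep versions any manifest accepts."""
    deps = {}
    for manifest in manifests:
        for name, (lo, hi) in manifest.items():
            cur_lo, cur_hi = deps.get(name, (lo, hi))
            deps[name] = (min(cur_lo, lo), max(cur_hi, hi))
    return deps

# Hypothetical bower.json vs spago.dhall bounds for the same package version:
bower = {"prelude": ((4, 0, 0), (5, 0, 0))}
spago = {"prelude": ((4, 1, 0), (6, 0, 0))}
assert intersect_bounds([bower, spago]) == {"prelude": ((4, 1, 0), (5, 0, 0))}
assert union_bounds([bower, spago]) == {"prelude": ((4, 0, 0), (6, 0, 0))}
```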
Not all package versions which existed in Bower will continue to be included in the registry. Specifically, any package version which cannot be solved / compiled with a compiler from 0.13.0 onwards (~early 2019) will be dropped from the registry. The full list of package versions that are present in Bower, GitHub, and all other sources but which won’t be in the registry are here.
At the same time Spago@next will also leave alpha and enter GA.
We will have a 1.0 release come out once the reupload has completed. spago publish will be broken on any version before this latest one, so we recommend you upgrade as soon as possible. The lockfile format is also changing, so expect to regenerate that; Nix users in particular will be affected and should upgrade and take note of the updates to the lockfiles.
Apart from the breakage to spago publish, previous versions of spago@next and spago-legacy should keep working as usual.
The Registry will be in an uncertain state (possibly down, possibly broken at times) from Friday 2026-01-30 18:00 UTC to Sunday 2026-02-01 22:00 UTC
We aim to have the reupload happen as soon as reasonably possible during the maintenance window, so that we can have some time to fix it up before everyone is back at their desks on Monday. We might be done quickly, or not, so this is all to say that if you can schedule your builds outside of these hours, then it might be a good idea to do so.
We will keep you posted on this thread on where we are with that, and please do report issues in here as well.
Fingers crossed, and I hope to be reporting some good news to you all soon
I implemented several type checker features in the past few weeks, namely constraint solving, compiler-solved type classes, and instance deriving. The analyzer playground now also supports loading packages from the package set. Check it out here, please try to break the type checker as much as possible!
https://purefunctor.github.io/purescript-analyzer/
In the next few months I’ll be focusing on implementing type checking parity for core packages, then once some things are ironed out I’ll start introducing type information to the LSP server.
The motivation is to make a generator for Selda, Simple.JSON, etc.
It is a parser. I also include an additional mechanism for custom attributes outside Prisma’s own syntax.
In this case, we can have:
`idBack String //_ @type("Either String Base64") // this is a real comment`
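As a rough illustration of how such a line might be tokenized (the real parser is written in PureScript; this regex and its group names are my own, purely for exposition):

```python
import re

# Illustrative tokenizer for a Prisma-like field line carrying a custom
# `//_ @attr("...")` annotation followed by an ordinary `//` comment.
LINE = re.compile(
    r'^(?P<field>\w+)\s+(?P<prisma_type>\w+)'            # field name and type
    r'(?:\s*//_\s*@(?P<attr>\w+)\("(?P<arg>[^"]*)"\))?'  # custom attribute
    r'(?:\s*//\s*(?P<comment>.*))?$'                     # real comment
)

m = LINE.match(
    'idBack String //_ @type("Either String Base64") // this is a real comment'
)
assert m is not None
assert m.group("field") == "idBack"
assert m.group("attr") == "type"
assert m.group("arg") == "Either String Base64"
assert m.group("comment") == "this is a real comment"
```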
I invite any suggestions for improvement.
I hope I can use this library as a means to learn more deeply.
I still feel that there should be a more concise solution here.
Thank you for your kind support.
Transit lets you specify state machines using a type-level DSL. The compiler then ensures your implementation is complete and correct: no missing cases, no invalid transitions, no documentation drift.
Here’s a simple door state machine:
```purescript
-- Define your state machine specification at the type level
type DoorTransit =
  Transit
    :* ("DoorOpen" :@ "Close" >| "DoorClosed")
    :* ("DoorClosed" :@ "Open" >| "DoorOpen")

-- Write the update function - the compiler checks it matches the spec!
update :: State -> Msg -> State
update = mkUpdate @DoorTransit
  ( match @"DoorOpen" @"Close" \_ _ ->
      return @"DoorClosed"
  )
  ( match @"DoorClosed" @"Open" \_ _ ->
      return @"DoorOpen"
  )
```
If you forget to handle a transition or add one that’s not in your specification, you’ll get a compile error.
Transit supports transitions with multiple possible outcomes:
```purescript
type CountDownTransit =
  Transit
    :* ("Idle" :@ "Start" >| "Counting")
    :* ("Done" :@ "Reset" >| "Idle")
    :* ( "Counting" :@ "Tick"
           >| "Counting" -- continue counting
           >| "Done"     -- or finish
       )

update :: State -> Msg -> State
update = mkUpdate @CountDownTransit
  -- ... handlers ...
  ( match @"Counting" @"Tick" \state _ ->
      if (state.count - 1) == 0
        then return @"Done"
        else return @"Counting" { count: state.count - 1 }
  )
```
The compiler verifies that you only return states listed in your specification.
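For readers outside PureScript, the contract that this checking enforces can be mimicked at runtime. This Python sketch (my own analogy, not Transit’s API) checks the same two properties, exhaustiveness and only-allowed-result-states, when the update function is built and called, rather than at compile time:

```python
# Runtime analogue of a type-level transition spec: a dict from
# (state, msg) to the set of allowed result states.
SPEC = {
    ("Idle", "Start"): {"Counting"},
    ("Done", "Reset"): {"Idle"},
    ("Counting", "Tick"): {"Counting", "Done"},
}

def mk_update(spec, handlers):
    """Build an update function, rejecting incomplete handler sets and
    handlers that return states outside the spec."""
    if set(handlers) != set(spec):
        raise TypeError(f"unhandled transitions: {set(spec) - set(handlers)}")
    def update(state, msg, payload):
        new_state, new_payload = handlers[(state, msg)](payload)
        if new_state not in spec[(state, msg)]:
            raise TypeError(f"{new_state!r} not allowed for {(state, msg)}")
        return new_state, new_payload
    return update

update = mk_update(SPEC, {
    ("Idle", "Start"): lambda p: ("Counting", {"count": 3}),
    ("Done", "Reset"): lambda p: ("Idle", {}),
    ("Counting", "Tick"): lambda p:
        ("Done", {}) if p["count"] - 1 == 0
        else ("Counting", {"count": p["count"] - 1}),
})

assert update("Idle", "Start", {}) == ("Counting", {"count": 3})
assert update("Counting", "Tick", {"count": 1}) == ("Done", {})
```

Transit moves both checks to the type level, so these failures surface as compile errors rather than exceptions.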
```sh
spago install transit
```
If you’re familiar with Servant from Haskell, Transit follows a similar philosophy: just as Servant uses a type-level REST API specification to ensure type-safe routing and generate OpenAPI docs, Transit uses a type-level state machine specification to ensure type-safe update functions and generate state diagrams.
I’d love to hear your thoughts, questions, or feedback!
Part 1, intro and repo setup includes
As usual, I’m not offended if no one engages, but I thought I’d post it here anyway in case anyone’s interested in learning more about writing PureScript backends, understanding CoreFn, the power of Lisp macros, and seeing how suitable functional programming actually is for modern GPU programming.