Haskell Community - Latest posts https://discourse.haskell.org Latest posts Try Sabela Reactive Notebooks Oh. That one doesn’t work because I forgot to package the data file, and the error messaging still needs work. It’s a port of this Marimo notebook:

https://discourse.haskell.org/t/try-sabela-reactive-notebooks/13811#post_3 Tue, 17 Mar 2026 02:01:07 +0000 discourse.haskell.org-post-54236
Try Sabela Reactive Notebooks I’m trying it, but just see “installing: containers, dataframe, text, vector” in the top right corner (now for about 15 mins), when I try abantu.md.

Also as a linguist, the Bantu language notebook looks so cool. Very excited to get this running!

https://discourse.haskell.org/t/try-sabela-reactive-notebooks/13811#post_2 Tue, 17 Mar 2026 01:56:14 +0000 discourse.haskell.org-post-54235
Try Sabela Reactive Notebooks https://lqjh9lqov3.execute-api.us-east-1.amazonaws.com/start

A fair amount of development has gone into the widgets and the rich text output. Startup times are slow (1-2 minutes). Please try it out and file any bugs in the repo:

https://discourse.haskell.org/t/try-sabela-reactive-notebooks/13811#post_1 Tue, 17 Mar 2026 01:07:35 +0000 discourse.haskell.org-post-54234
Sneak Peek: Bolt Math No worries!

I don’t think we are actually in any sort of disagreement, so I feel the need to clear some air first - I 100% absolutely love the questions, and given your reaction, I think I may have come across as a fair bit sharper than I intended (I was going for short; I am almost always too long-winded), so apologies for that, and thank you for not returning the favor!

I actually want to thank you for being clear on this, because I could not tell what you were asking (no disrespect meant - this is on me, if you’ll allow me to explain shortly). I restated my wedge argument because I was not sure.

Actually, precisely! Minor correction: the wedge product generalizes multiplication, so *, but yes, because scalar multiplication really is the special case of wedge products for scalars (because they are grade-0 vectors). It’s not because I really want to; it’s because it really is.

Subtraction has the same thing going on, with affine vs vector spaces and diff :: p -> p -> v: not an act, not homogeneous - but definitely a specialization of subtraction, for which the symbol is appropriate. The notational flexibility (thank you for that phrase) allows me to express this, and it really helps minimize the number of symbolic operators, which for me is extremely important.


I love it here because people are kind and welcome, so I feel safe talking about this, because the topic is math communication which makes it oh so relevant:

If you read any of my older writing, you may have noticed that my writing style is, well, let’s just say extremely odd :slight_smile: I have been working hard on improving, though.

To sum up a lot rather quickly: I am actually partially dyslexic, and I struggle to read math notation; or more precisely, tachyphemia is a neurological disorder known to most people as ‘dyslexia’, except it can also include other things:

  • Cluttering (word order, speaking and parsing speech)
  • Dysgraphia (writing and fine motor skills)
  • Dyspraxia (coordination and gross motor skills)
  • Dyslexia (reading and writing)
  • Dyscalculia (math and number sense)

It doesn’t affect my intelligence - obviously, I have fantastic linguistics, reading, and math skills or I wouldn’t be here - but I can be very slow to write or to speak if I have not planned to. In a great twist of irony, despite being strong at reading prose and doing math in my head, I am more or less symbol-blind* and can’t really read math notation, musical notation, or the Greek alphabet (not without some sort of legend or reference), which is probably why I got into programming over mathematics: function names are usually words, and a program speaks for itself.

Since it does affect my ability to parse symbols, I do try to minimize their number and complexity, and to unify them and avoid making redundant operators wherever possible. This greatly colors my library design, but I do not think that it is a problem: it’s not that I oppose the creation of convenience operators or anything, I just want to make sure my library works without them.


* I do better with shapes (form) than symbols (intent), and while there are pros and cons to that, I do get really annoyed by overdesigned UI because it quickly turns into visual noise for me, and I can’t play a lot of games because of it - for instance, I love playing board games like Tokaido with friends, but I am constantly asking what things are: although the artwork is beautiful, it is so heavily stylized that I can’t tell what anything actually means.

https://discourse.haskell.org/t/sneak-peek-bolt-math/13766?page=2#post_22 Mon, 16 Mar 2026 22:58:48 +0000 discourse.haskell.org-post-54233
Sneak Peek: Bolt Math Adjoints, anyone?
Last time I was building something using units, I needed the arithmetic equivalent of symmetric set difference. Take the monoid operation • to be addition in a lattice of non-negative quantities (e.g. energy). This makes the semiring residuated.

Try to take amount y from amount x, where both are non-negative. Conceptually, x-y is the unique number such that y+(x-y) = x, or, in the language of residuals, subtraction is right adjoint to addition in the lattice of numbers. If all quantities must stay non-negative (e.g. energy), the proper difference x-y must become a pair, and uniqueness is lost. We can, however, choose a canonical representative:

  • If y > x, then the result is the pair (0, y-x) signifying that there was more to take away than what was there.
  • If x > y then the result is the pair (x-y, 0) signifying what is left over after taking away some.

I have the impression that primary school kids use a related concept when doing arithmetic involving numbers larger than 10, the fundamental operation being to split a quantity (number) using a smaller one.
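A tiny Haskell sketch of this canonical representative (the function name is my own, not from any library):

```haskell
-- Truncated subtraction ("monus") on non-negative quantities.
-- Taking y from x yields a pair (leftover, deficit); at least one
-- component is always zero, which gives a canonical representative.
splitDiff :: (Ord a, Num a) => a -> a -> (a, a)
splitDiff x y
  | x >= y    = (x - y, 0)  -- what is left over after taking y away
  | otherwise = (0, y - x)  -- there was more to take away than was there
```

For example, splitDiff 3 7 gives (0, 4), matching the first bullet above, and splitDiff 7 3 gives (4, 0), matching the second.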

https://discourse.haskell.org/t/sneak-peek-bolt-math/13766?page=2#post_21 Mon, 16 Mar 2026 21:49:45 +0000 discourse.haskell.org-post-54232
How to practically enable `-Wmissing-import-lists`? This does not address your concerns exactly, but it is related. I maintain a compiler plugin called om-plugin-imports, which dumps a canonical list of imports that you can copy-paste into your module to satisfy -Wmissing-import-lists. (see the readme)

It could easily be modified to create a plugin-based solution for exactly what you want. I.e. you wouldn’t actually use -Wmissing-import-lists in your codebase; you would instead use a modified version of this plugin, which gave you the exact behavior you were looking for (a compile error when there are any missing import lists, except for Prelude).

-Rick

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_7 Mon, 16 Mar 2026 21:48:09 +0000 discourse.haskell.org-post-54231
How to practically enable `-Wmissing-import-lists`? I tried this recently and it doesn’t work with “multiple home units” which is crucial for HLS to work smoothly in large projects, among other things.

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_6 Mon, 16 Mar 2026 17:36:41 +0000 discourse.haskell.org-post-54230
How to practically enable `-Wmissing-import-lists`? I’m not sure; unfortunately I don’t get to play with mixins like this too often, I just wanted to make sure that rewriting module names from packages like this wasn’t left by the wayside.

Can you use internal cabal library packages?

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_5 Mon, 16 Mar 2026 10:55:11 +0000 discourse.haskell.org-post-54229
How to practically enable `-Wmissing-import-lists`? I considered this too, but don’t you then have to create two packages? I guess you could make a private sublibrary, but it seems like more work. Perhaps you can use mix-ins to rename the normal prelude to avoid having to use package imports in your custom prelude module.

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_4 Mon, 16 Mar 2026 10:53:02 +0000 discourse.haskell.org-post-54228
How to practically enable `-Wmissing-import-lists`? You could also use cabal’s mixins to replace prelude in another manner: 7. Package Description — Cabal 3.4.0.0 User's Guide
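For reference, the mixins approach looks roughly like this (the `my-prelude` package and `MyPrelude` module names here are hypothetical):

```cabal
library
  build-depends:
    base        >=4.14,
    my-prelude
  -- Hide base's Prelude and mount a custom module under that name instead.
  mixins:
    base       hiding (Prelude),
    my-prelude (MyPrelude as Prelude)
```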

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_3 Mon, 16 Mar 2026 10:44:43 +0000 discourse.haskell.org-post-54227
Announcing `crem`
marcosh:

Nice! I’ll try to take a look at your work and see if I can make it compile for the whole project.

The work I did got me like 99% of the whole module compiling. So, your work on an update is going to be really easy. I was about to submit a pull request, but I hesitated because of the doctests literate-Haskell part, which confused me a bit. If you want, I can submit a pull request to get you most of the way there.

Also, I didn’t quite get around to adding the upper bounds back into the cabal file. So, without those, I imagine it could break pretty easily until you put those in (or do a cabal freeze).

https://discourse.haskell.org/t/announcing-crem/6012#post_12 Mon, 16 Mar 2026 10:35:02 +0000 discourse.haskell.org-post-54226
Announcing `crem` It definitely feels clunky compared to dependent types but it’s REALLY cool and futuristic for this ecosystem.

Absolutely! I had been hesitant for a while. Then, I was thinking about the guarantees that CREM could offer me and decided to use it. Thanks again for your brilliant work.

https://discourse.haskell.org/t/announcing-crem/6012#post_11 Mon, 16 Mar 2026 10:30:29 +0000 discourse.haskell.org-post-54225
Yet another opinion on LLMs · Hasufell's blog The one thing that sticks with me is how well I think the LLM performs always seems to correlate with my knowledge of the problem space: the better I understand the domain and technology being used, the worse the LLM is. Maybe the things I understand well are things LLMs are bad at? I find that hard to believe, though.

I also can’t figure out if I am more productive. I can certainly do more things, but am I more productive because of that? Does me churning out a new feature or developer tool fast bring a net gain? I have the firm belief that anything we create also introduces some debt that must be paid at some point, and our job is to reduce that debt as much as possible. AI makes it very easy to just create and not think about technical debt.

I have always thought software engineering to be a crazy profession as we have no professional standards, and this AI train we are on makes it seem like that was very much by design. We can vibecode and sell software, and as long as it appears to do what the client wanted that’s good enough. I wish I could sell bridges built with the cardboard in my shed, well maybe I don’t.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=3#post_43 Mon, 16 Mar 2026 09:58:48 +0000 discourse.haskell.org-post-54224
Sneak Peek: Bolt Math
ApothecaLabs:

Also, the trivial example is wedging simple vectors: wedge :: v -> v -> Bivector v - not an act. Not a homogeneous relation either. Is this a sufficient example?

I’m not claiming that all mathematical operators are either homogeneous or correspond to an action - that would be a big claim! That is, surely many operators have result types that correspond to neither of the input types.

Perhaps I would question whether you need an abstraction that covers both addition on integers and the wedge product on vectors? Is the goal just to have a great degree of notational flexibility, so you can use + for the wedge product if you really want?

https://discourse.haskell.org/t/sneak-peek-bolt-math/13766#post_20 Mon, 16 Mar 2026 09:52:45 +0000 discourse.haskell.org-post-54223
Announcing `crem` yes! I initially created the library to try to fill the gap between a drawing of the design and its implementation.

If you want, the original repository contains several examples (crem/examples/Crem/Example at main · marcosh/crem · GitHub), from really simple to more complex ones.

https://discourse.haskell.org/t/announcing-crem/6012#post_10 Mon, 16 Mar 2026 09:18:53 +0000 discourse.haskell.org-post-54222
Announcing `crem` Nice! I’ll try to take a look at your work and see if I can make it compile for the whole project.

It’s awesome that you’re using crem for your project. Is it something that you plan to use in production?

https://discourse.haskell.org/t/announcing-crem/6012#post_9 Mon, 16 Mar 2026 09:15:46 +0000 discourse.haskell.org-post-54221
How to practically enable `-Wmissing-import-lists`? You could use an implicit custom prelude:

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_2 Mon, 16 Mar 2026 08:16:50 +0000 discourse.haskell.org-post-54220
How to practically enable `-Wmissing-import-lists`? Edit:

On further thought, I think I might be able to achieve the below using hlint importStyle directives, but that’s a WIP. I will post results here, but I will leave the original post up in case people have a simpler way of working this out:

Original post:

So I’ve started working on a medium-size Haskell codebase (high five figures in LOC) built using GHC, and I’m working on making the warnings stricter to clean up the codebase and keep it clean. What I would like to do is enable -Wmissing-import-lists and, naturally, -Werror. I like to be able to see at a glance where a function is coming from, and -Wmissing-import-lists achieves this by requiring imports to be either qualified or explicit [1].

Here’s the issue. I have in the codebase a number of imports like the following:

import Prelude hiding (...)

-Wmissing-import-lists complains about this, which I think is silly, because it doesn’t complain about an implicit Prelude import despite the fact that it has no import list, and import Prelude hiding (...) actually imports LESS than an implicit Prelude import.

This forces me to do explicit Prelude import lists anywhere where there’s a Prelude hiding clause.

Whilst I do think there’s too much in the Prelude, I think -Wmissing-import-lists forcing one to import Bool(True, False) just so one can hide something from the Prelude is a bit silly.

What I would probably ideally like to do is leave on -Wmissing-import-lists, also turn on NoImplicitPrelude, and write my own Prelude module (say MiniPrelude) that only exports a subset of Prelude, but allow that module, and only that module, to be imported without an import list.
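That idea, sketched out in code (the module name and export list are illustrative only):

```haskell
{-# LANGUAGE NoImplicitPrelude #-}
-- Suppress the warning inside this one module, which re-exports Prelude.
{-# OPTIONS_GHC -Wno-missing-import-lists #-}

-- | A curated subset of the Prelude; with NoImplicitPrelude enabled
-- project-wide, every other module imports this instead.
module MiniPrelude
  ( Bool (True, False)
  , Maybe (Just, Nothing)
  , (.), ($), (<$>)
  , fmap, pure, otherwise
  ) where

import Prelude
```

The remaining wrinkle is exactly the one described above: -Wmissing-import-lists would still fire on a bare `import MiniPrelude` in client modules.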

I don’t really know what to do here. It seems like -Wmissing-import-lists, whilst a useful warning, is completely impractical unless one is okay with:

  1. Not being able to hide anything from the Prelude OR
  2. Having to explicitly list all Prelude imports, including things like Bool and ..

-Wmissing-import-lists even makes NoImplicitPrelude impractical, because wherever one uses their alternative Prelude one needs to import everything explicitly (when presumably one’s explicit prelude is carefully curated anyway).

I’ve even considered creating a CPP macro that spits out all the default imports I want from the Prelude, in the form:

import Prelude (Bool(True, False), (.), ($), ...)

but that won’t work, because then I’ll get -Wunused-import warnings everywhere.

Suppressing the warning by line would help, but apparently this isn’t possible, assuming this 21 year old ticket is still actually open.

Anyone have any idea how I can move forward here? I would really like to enable -Wmissing-import-lists, as it really helps me navigate and keep clean a currently messy codebase where the interdependencies are far from clear, but the resulting requirement for explicit Prelude import lists, with no practical way to use a smaller custom Prelude, is proving a very annoying showstopper.


  1. I know HLS, when it’s working well, can give you this information as well, but it’s flaky and sometimes slow to load and adjust to refactors on large projects, so I don’t want to rely on it. ↩︎

https://discourse.haskell.org/t/how-to-practically-enable-wmissing-import-lists/13810#post_1 Mon, 16 Mar 2026 03:09:22 +0000 discourse.haskell.org-post-54219
Yet another opinion on LLMs · Hasufell's blog FWIW you linked to a Reuters article about a METR study that suggested that use of AI slowed down developer productivity.

METR have more recently published an update that suggests things have improved since then, though their confidence intervals still include zero: We are Changing our Developer Productivity Experiment Design - METR

In particular, they suggest that the study underestimates AI productivity boosts because the people who benefit from it the most no longer want to participate in studies where they might get assigned the no-AI control group.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=3#post_42 Sun, 15 Mar 2026 23:21:12 +0000 discourse.haskell.org-post-54218
Sneak Peek: Bolt Math Yes, I went looking in numhask and you might be right.

QuotientField seems to work very well in an action context, but the other heterogeneous bits also look suspiciously like actions in disguise.

https://discourse.haskell.org/t/sneak-peek-bolt-math/13766#post_19 Sun, 15 Mar 2026 22:48:06 +0000 discourse.haskell.org-post-54217
Yet another opinion on LLMs · Hasufell's blog
hasufell:

E.g. I prefer tools that assist me with correctness over tools that assist me with productivity. I find that much more pleasing. Maybe not everyone thinks that way?

Is it really that and not just that it changes what you focus on minute to minute when working on something? Because one can usually trade productivity for correctness both ways. Of course it gets murky when a tool improves productivity while making correctness worse, which is the case with LLMs. But the general principle still holds imo.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_41 Sun, 15 Mar 2026 21:58:30 +0000 discourse.haskell.org-post-54216
Yet another opinion on LLMs · Hasufell's blog The approach I’ve ended up on is essentially “pair programming”. Discuss approaches with it, come up with a plan, let it code it if you want, then look it over. Got a bug? It’ll suggest tests and investigations; if you think it’s missing something you can tell it. Just adds a huge amount of value this way.

On the other hand, if you just tell it “go implement this feature” like a manager, it might not do so well. Possibly a multi-agent approach where you have one instance doing the work and another reviewing and the two iterating might work: I’ve heard of people doing this but I haven’t tried it.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_40 Sun, 15 Mar 2026 21:56:10 +0000 discourse.haskell.org-post-54215
Sneak Peek: Bolt Math I cut my Haskell baby teeth on that library; it was a wealth of archeological knowledge. It had a History monad that became my perf library and really should be a better-known pattern for convergence and anything loopy, and numhask & harpie are spiritual descendants of the patterns in there.

It was too disconnected from everything else to be practical.

https://discourse.haskell.org/t/sneak-peek-bolt-math/13766#post_18 Sun, 15 Mar 2026 20:01:04 +0000 discourse.haskell.org-post-54214
Yet another opinion on LLMs · Hasufell's blog I probably should have said “wide” rather than “deep”. As I said, it’s like a junior-mid developer, but it nevertheless knows a little bit about every topic.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_39 Sun, 15 Mar 2026 19:52:03 +0000 discourse.haskell.org-post-54213
Yet another opinion on LLMs · Hasufell's blog
hasufell:
  • claude sonnet 4.6 via claude.ai
  • GPT-4.1 sometimes via github
  • Gemini 3 via google

All of them are crap.

GitHub’s default use of GPT-4.1 is incredibly stupid (unless they want users to think this feature is just a quirk). Even Haiku has more smarts than that ancient PoS.

Gemini can be hit or miss, depending on whether you get served the “flash” or the “pro” model. But even Pro is… not that great.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_38 Sun, 15 Mar 2026 18:10:13 +0000 discourse.haskell.org-post-54211
Yet another opinion on LLMs · Hasufell's blog
reuben:

I personally don’t think it’s a good idea to avoid anthropomorphic language. It’s useful (not just for whimsical purposes) to say things like “Claude thought I meant ‘linear’ in the sense of linear algebra, but I actually meant it in the sense of linear logic; let me clarify that.” I would say that having beliefs and reasoning is (sometimes) a good effective description of what a complicated computer system is doing.

That’s a good example of precisely what I try to avoid. But Dijkstra has covered this topic comprehensively. I don’t need to reiterate it.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_37 Sun, 15 Mar 2026 16:29:30 +0000 discourse.haskell.org-post-54209
Yet another opinion on LLMs · Hasufell's blog Never “he” or “she” - always “it.”

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_36 Sun, 15 Mar 2026 16:16:52 +0000 discourse.haskell.org-post-54208
Yet another opinion on LLMs · Hasufell's blog Thanks! Yes, I’m aware - the Chinese room argument (or rather the many critiques of it) is a good example of why I think the “it’s just doing autocomplete” argument is not at all convincing. But yes, I agree there’s a large literature of opinions and arguments here that many smart people have contributed to.

I personally don’t think it’s a good idea to avoid anthropomorphic language. It’s useful (not just for whimsical purposes) to say things like “Claude thought I meant ‘linear’ in the sense of linear algebra, but I actually meant it in the sense of linear logic; let me clarify that.” I would say that having beliefs and reasoning is (sometimes) a good effective description of what a complicated computer system is doing.

Anyway, I don’t want to derail the discussion, so let me leave it there.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_35 Sun, 15 Mar 2026 16:14:51 +0000 discourse.haskell.org-post-54207
Yet another opinion on LLMs · Hasufell's blog My approach is to avoid all anthropomorphic language, as Dijkstra warned. Just substitute “AI” with “computer” (or another piece of technology, like GHC) and the silliness is highlighted directly, e.g. “I asked the computer”. Don’t say “it wants to” or “it makes mistakes” - a computer program does what it’s programmed to do. In the engineering world we don’t say that software “makes mistakes”; we call those errors or failures, or use some other technical language. No one says that a Bayesian spam filter “makes mistakes” if some spam gets through.

Anthropomorphic language is fun and whimsical in a Terry Pratchett kind of way, but only if it’s self-aware, or tongue-in-cheek. I think the problem with LLMs and “AI” is that people seem to forget they’re doing it, and it maybe reflects deeper perceptions.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_34 Sun, 15 Mar 2026 16:04:36 +0000 discourse.haskell.org-post-54206
Yet another opinion on LLMs · Hasufell's blog
reuben:

do we really have a clear enough characterization of what reasoning is to be able to say that LLMs are not doing it?

This sort of thing has been studied quite a bit in philosophy. This article is a good starting point if you are interested: The Chinese Room Argument (Stanford Encyclopedia of Philosophy)

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_33 Sun, 15 Mar 2026 15:46:19 +0000 discourse.haskell.org-post-54205
Yet another opinion on LLMs · Hasufell's blog Nicely put - I totally agree.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_32 Sun, 15 Mar 2026 15:41:10 +0000 discourse.haskell.org-post-54204
Yet another opinion on LLMs · Hasufell's blog
hasufell:

I prefer tools that assist me with correctness over tools that assist me with productivity. I find that much more pleasing.

Quality over quantity - I agree :grin:

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_31 Sun, 15 Mar 2026 15:40:13 +0000 discourse.haskell.org-post-54203
Yet another opinion on LLMs · Hasufell's blog
reuben:

My sense is that part of the disparity between opinion comes from the ideological positions

That’s an interesting perspective.

But I’d not characterize it so much as ideological positions, but more about what type of user experience people have and expect.

To that end, it seems we need to answer all the questions of workflow, utility and so on. E.g. I’m not interested in using them for coding, because, well… I don’t perceive coding as a nuisance.

Why are some people fine doing vibe coding and others are not? I don’t think we can answer this question merely by focusing on the definition of reasoning or intelligence. These are all user experiences.

E.g. I prefer tools that assist me with correctness over tools that assist me with productivity. I find that much more pleasing. Maybe not everyone thinks that way?

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_30 Sun, 15 Mar 2026 15:27:59 +0000 discourse.haskell.org-post-54202
Yet another opinion on LLMs · Hasufell's blog My sense is that part of the disparity between opinion comes from the ideological positions on the nature of intelligence that the two camps have here. That is, if you believe that LLMs are a fancy autocomplete that merely imitates semantic content, I think you’ll tend to interpret its failures as evidence of a lack of reasoning ability (rather than a more anthropomorphic explanation like: it didn’t have enough context to know what to do).

I tend to agree with the scepticism about their ability to do genuinely difficult tasks without a lot of context. But I’m not really convinced that it stems from a fundamental problem with the technology; do we really have a clear enough characterization of what reasoning is to be able to say that LLMs are not doing it? Rather, I would say that they often reason rather well, and also often fail to reason.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_29 Sun, 15 Mar 2026 14:36:52 +0000 discourse.haskell.org-post-54199
Issue 515 :: Haskell Weekly newsletter The last successful newsletter email happened on February 12. Since then, I don’t think any emails have gone out. I only noticed this myself a few days ago. I haven’t figured out why this happened yet. It doesn’t appear to be related to any code changes, and it doesn’t appear to be malicious.

https://discourse.haskell.org/t/issue-515-haskell-weekly-newsletter/13785#post_3 Sun, 15 Mar 2026 13:43:16 +0000 discourse.haskell.org-post-54198
Functional Valhalla? The vector package comes with a way to derive Unbox instances using generics: Data.Vector.Unboxed
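For anyone curious what that looks like, here is a sketch using the `IsoUnbox` deriving-via machinery from recent vector releases (the `Point` type is my own example; check the linked docs for the exact pragma set your vector version needs):

```haskell
{-# LANGUAGE DeriveGeneric, DerivingVia, MultiParamTypeClasses,
             StandaloneDeriving, TypeFamilies, TypeOperators #-}
import qualified Data.Vector.Generic         as VG
import qualified Data.Vector.Generic.Mutable as VGM
import qualified Data.Vector.Unboxed         as VU
import GHC.Generics (Generic)

data Point = Point !Double !Double
  deriving (Show, Generic)

-- The isomorphism to an already-unboxable representation is derived
-- generically from the Generic instance; no methods need to be written.
instance VU.IsoUnbox Point (Double, Double)

newtype instance VU.MVector s Point = MV_Point (VU.MVector s (Double, Double))
newtype instance VU.Vector    Point = V_Point  (VU.Vector    (Double, Double))
deriving via (Point `VU.As` (Double, Double)) instance VGM.MVector VU.MVector Point
deriving via (Point `VU.As` (Double, Double)) instance VG.Vector  VU.Vector  Point
instance VU.Unbox Point
```

After this boilerplate, `VU.Vector Point` stores points as flat unboxed pairs rather than as boxed heap objects.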

https://discourse.haskell.org/t/functional-valhalla/13798#post_2 Sun, 15 Mar 2026 10:24:41 +0000 discourse.haskell.org-post-54197
Yet another opinion on LLMs · Hasufell's blog
AshleyYakeley:

I have found them excellent for understanding new technologies (especially how to use package X with library Y in situation Z), general planning design and architecture, debugging and analysis (e.g. “why is this SQL query so slow and what are some alternative approaches”), code review, and a certain amount of code writing on existing projects.

This might not apply to you. But in general, if someone tells me they consider these tools to have “deep experience”, it’s a huge red flag to me, because this comes with a notion of competence these tools just can’t have today. Which in turn means that person will likely be far too trusting of the output of those tools, resulting in problems down the road.

They sometimes make basic mistakes that really only an idiot would make. Sure, it’s a very useful idiot for certain tasks, but elevating them to the status of “expert” seems highly problematic to me.

I find it far more helpful to think of it as my “idiot bot” that’s hooked up to “all” the world’s knowledge. Because these tools already do far too much to make you trust their output, calling it an idiot really only partially offsets the bias that using them fosters in me.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_28 Sun, 15 Mar 2026 09:22:46 +0000 discourse.haskell.org-post-54196
Yet another opinion on LLMs · Hasufell's blog
Swordlash:

I honestly never had it hallucinate, but maybe I didn’t notice?

I find it hard to imagine how you used those tools without them hallucinating at some point. Maybe you mentally classified those issues as them “just being wrong” or something?

Because for me it happens pretty often. And yes, with “reasoning”, they will often correct themselves before presenting you a final solution. But sometimes they won’t. And sure, we can say any particular hallucination could have been prevented if there had been more context, if the prompt had been better, or for some other reason. But the whole issue is that one can’t know beforehand what input will lead to hallucinations.

Which in turn means any output should ideally be considered untrustworthy. But that is just not a cognitive load that people can maintain in the long run. I have some experience in fields where safety issues and defects come with much higher costs, and even there, long before AI, it was hard to ensure outputs were properly vetted and checked. And we had some high-profile examples of such issues recently, in aviation of all fields.

It’s clear to me that, in practice, most companies will either accept a higher defect rate in exchange for higher productivity, or they end up taking additional measures (more types, more tests, more invariants, more specific context) to minimize the risk of defects, which eats into the increased productivity. And for the latter situation, it’s very unclear to me how much of a win those tools really are.

https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_27 Sun, 15 Mar 2026 08:50:16 +0000 discourse.haskell.org-post-54195
Issue 515 :: Haskell Weekly newsletter Hi, thank you for editing an enjoyable newsletter every week! I’ve been really enjoying the posts listed in the newsletters.

By the way, I haven’t received Haskell Weekly at my Gmail address for almost a month, and it seems the issues are not even misclassified into the spam folder. Although I don’t think I had unsubscribed, I tried re-registering my email last week, but the most recent issue doesn’t seem to reach me either.

Is anyone else encountering the same issue? What should I do as a next step? I would be willing to share my actual e-mail address via DM or something.

https://discourse.haskell.org/t/issue-515-haskell-weekly-newsletter/13785#post_2 Sun, 15 Mar 2026 06:51:26 +0000 discourse.haskell.org-post-54192
Yet another opinion on LLMs · Hasufell's blog
hasufell:

I’m starting to believe that not everyone actually rigorously questions the output. There have been cases where they suggested I adjust tests (as in: actually break them) to fix a problem.

They really, really should question it. Always, because I have noticed this too - I think it is an artifact of how models have been specifically trained to make unprompted suggestions almost no matter what: even if the problem space is sufficiently constrained (i.e., you show it an actually ‘perfect’ piece of code of sufficient complexity), it will still try to fill the suggestions void in one of several ways:

  • copy and paste snippets of your code verbatim and tell you it made changes and improvements
  • make meaningless syntactic changes that have no effect on program structure
  • suggest increasingly bizarre nomenclature
  • actually start breaking shit like you said
]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_26 Sun, 15 Mar 2026 03:31:48 +0000 discourse.haskell.org-post-54191
Yet another opinion on LLMs · Hasufell's blog
AshleyYakeley:

I honestly think the difference is the models we’re using.

I don’t think that’s the case.

Yesterday I used Claude Opus 4.6 to debug an exec format error on GitHub’s Windows CI that popped up recently (despite the CI itself not having changed in half a year).

All the models were absolutely useless and started making stuff up that I knew to be wrong.

Instead, I think there are two key differences about how we use LLMs.

Low context, difficult problems

For one, I try to use them in cases where I don’t quite know myself how to go about the problem… that means

  • there’s not a whole lot of context sometimes
  • I can’t provide a super specific prompt
  • it requires actual reasoning and following instincts you may have developed over the years

In this case the probabilistic approach breaks down and they’re absolutely atrocious. The only way to have them succeed in such a case is to let them cook and iterate over the problem on their own. They’ll eventually bruteforce their way through it, I guess. It’s also great proof that they don’t actually reason and never will. They just give that illusion.

I do not believe that they will improve in this area significantly over the next few years. This is rather part of their nature.

They succeed more regularly in cases where you have:

  • a whole lot of context
  • a tiny, but very specific problem

But in this case, I usually know myself what to do and see interacting with an LLM prompt as more of a nuisance.

Questioning the output

I’m starting to believe that not everyone actually rigorously questions the output. There have been cases where they suggested I adjust tests (as in: actually break them) to fix a problem.

The same goes for “explain this codebase to me”. You’ll never really know how accurate it is. There’s no immediate feedback. You’ll potentially operate on false mental models until you hit a problem you can’t wrap your head around, and then… guess what… you ask the LLM to solve it for you instead of questioning your mental models. Then you move on.

I find this rather scary, and I’m a bit confused about how so many engineers drop most of their “correctness obsession” in favor of something that resembles a gambling machine and is in fact quite addictive.

Yes, you can get results, but it appears it’s much easier when you embrace the vibe, stop questioning, and leave your doubts behind. It’s not about finding truths anymore; it’s about the experience.

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_25 Sun, 15 Mar 2026 03:05:50 +0000 discourse.haskell.org-post-54190
Sneak Peek: Bolt Math Great minds think alike - that is precisely what I did for the alternative homogeneous operators that I mentioned but didn’t post - I might as well post them, since you all really want to make sure they exist :slight_smile:

module Bolt.Math.Syntax.Operators.Simple where

import Bolt.Math.Internal.Prelude

import Bolt.Math.Addition
import Bolt.Math.Multiplication
import Bolt.Math.Exponentiation

infixr 8  ^
infixl 7  *, /
infixl 6  +, -

(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
a + b = plus a b

(-) :: (Subtraction a a, Delta a a ~ a) => a -> a -> a
a - b = minus a b

(*) :: (Multiplication a a, Product a a ~ a) => a -> a -> a
a * b = times a b

(/) :: (Division a a, Ratio a a ~ a) => a -> a -> a
a / b = divides a b

(^) :: (Exponentiation a a, Power a a ~ a) => a -> a -> a
a ^ b = pow a b

Obviously mine are eta-expanded, but that’s a matter of preference. I will also probably be adopting specialized numhask-style left and right act operators, because I have decided that I like them, though they will get their own operator module in Bolt.Math.Syntax.Operators.Act or something.
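For readers unfamiliar with the act operators mentioned here, a minimal self-contained sketch of what numhask-style left and right actions look like (the class names and the `|*` / `*|` spellings below are assumptions for illustration, not Bolt Math’s or numhask’s actual API):

```haskell
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module Main where

-- Hypothetical sketch of left/right actions: a scalar acting on a
-- structure from either side. Names are illustrative assumptions.
class LeftAction s a where
  (|*) :: s -> a -> a

class RightAction a s where
  (*|) :: a -> s -> a

-- A scalar scaling every element of a list, from the left or the right
instance LeftAction Int [Int] where
  s |* xs = map (s *) xs

instance RightAction [Int] Int where
  xs *| s = map (* s) xs

main :: IO ()
main = do
  print ((2 :: Int) |* ([1, 2, 3] :: [Int]))  -- scalar acts from the left
  print (([1, 2, 3] :: [Int]) *| (2 :: Int))  -- scalar acts from the right
```

The point of having both directions as separate operators is that actions need not be commutative in general, even though they coincide for this toy instance.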

]]>
https://discourse.haskell.org/t/sneak-peek-bolt-math/13766#post_17 Sun, 15 Mar 2026 02:02:36 +0000 discourse.haskell.org-post-54189
Yet another opinion on LLMs · Hasufell's blog Hi! Thanks for the blog post and for your work on Haskell.
When I saw your opinion, I became curious what you would think about this Karpathy interview: Andrej Karpathy — AGI is still a decade away

It was trending on HN a while back so you might have seen it already. Some parts that might be relevant:

  • current AI as more or less autocomplete at best
  • march of nines: quality of production-grade software.
    I think this relates to your concerns about using LLM to build things.
    Personally, I have difficulty imagining myself trusting it for things that can’t be trivially verified.
  • as an AI expert, Karpathy telling people when not to use AI
]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_24 Sun, 15 Mar 2026 01:56:08 +0000 discourse.haskell.org-post-54188
Sneak Peek: Bolt Math It seems to me that you might be able to get the flexibility that you need and preserve type inference in the common case if you use the operators as constrained aliases for the heterogeneous word form:

class Addition a b where
  type Sum a b :: Type
  plus :: a -> b -> Sum a b

-- | Constrained version of 'plus', for arithmetic convenience.
(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
(+) = plus
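To make the trade-off concrete, here is a small self-contained sketch (the `Main` module, the `Int` instance, and the `Point`/`Offset` types are illustrative assumptions, not part of Bolt Math): the word form `plus` stays heterogeneous, while the constrained `(+)` pins both arguments to one type so ordinary numeric code infers as before:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}
module Main where

import Prelude hiding ((+))
import qualified Prelude

-- Mock of the class from the post, not the real library
class Addition a b where
  type Sum a b
  plus :: a -> b -> Sum a b

-- Homogeneous case: ordinary Int addition
instance Addition Int Int where
  type Sum Int Int = Int
  plus = (Prelude.+)

-- Heterogeneous case: adding an offset to a point yields a point
newtype Point  = Point Int  deriving (Eq, Show)
newtype Offset = Offset Int deriving (Eq, Show)

instance Addition Point Offset where
  type Sum Point Offset = Point
  plus (Point p) (Offset o) = Point (p Prelude.+ o)

-- The constrained alias: homogeneous only, so inference stays easy
(+) :: (Addition a a, Sum a a ~ a) => a -> a -> a
(+) = plus

main :: IO ()
main = do
  print (1 + 2 :: Int)             -- operator: homogeneous, infers fine
  print (Point 3 `plus` Offset 4)  -- word form: heterogeneous, still available
```

Because of the `Sum a a ~ a` constraint, GHC only needs to determine one type at a use site of `(+)`, which is what rescues inference in the common case.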
]]>
https://discourse.haskell.org/t/sneak-peek-bolt-math/13766#post_16 Sun, 15 Mar 2026 01:33:15 +0000 discourse.haskell.org-post-54187
Yet another opinion on LLMs · Hasufell's blog I honestly never had it hallucinate, but maybe I didn’t notice? If you write a proper Claude.md, teach it a typechecking skill, and tell it to run tests, it’s gonna iterate a few times until eventually it gets it right. It might be slow, but at least it’s correct.

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_23 Sat, 14 Mar 2026 22:46:52 +0000 discourse.haskell.org-post-54186
Yet another opinion on LLMs · Hasufell's blog exactly this: give an LLM a stub function with some very distinct types and a defined behavior, and it can fill that in without issues. add a few unit tests to actually verify it, and it has a hard time producing anything that’s fundamentally wrong. the coding style? from meh to horrible at times, but if prodded enough, it can clean that up too
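As an illustration of the kind of tightly scoped stub described above (the function and spec here are hypothetical, not from the thread), a distinct type signature plus a couple of unit checks leaves very little room for a model to go wrong:

```haskell
module Main where

import Data.Char (toUpper)

-- Hypothetical stub of the kind described: the type and the spec pin
-- down the behavior before any body is written.
-- Spec: uppercase the first letter of each whitespace-separated word.
titleCase :: String -> String
titleCase = unwords . map capitalize . words
  where
    capitalize []       = []
    capitalize (c : cs) = toUpper c : cs

-- Unit-style checks that verify whatever body gets filled in
main :: IO ()
main = do
  print (titleCase "hello world" == "Hello World")
  print (titleCase "" == "")
```

With the type and tests fixed up front, a compile-then-test loop can mechanically reject wrong fills, which is the "limited scope" the post is describing.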

doing a refactor and now you’ve got a bunch of simple compile errors all throughout the project? a coding agent is perfectly capable of compiling, fixing those locally, repeat. it isn’t even that bad at writing sed scripts that do a lot of that mechanically (haskell has quite a big advantage here, since it is nearly impossible to get this wrong and still have the code compile)

so there are quite a few scenarios where you can effectively use them. you don’t even have to touch your editor to write out most of your actual code: just stub out a structure that makes sense, mark problematic sections that need rework, point out edge cases that still need to be tested/verified, and in such a limited scope, that works perfectly well. just don’t let it off the leash, or you’ll end up with a mess that nobody (human or ai) can fix or maintain past a certain size

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775?page=2#post_22 Sat, 14 Mar 2026 21:12:43 +0000 discourse.haskell.org-post-54185
Yet another opinion on LLMs · Hasufell's blog
Ambrose:

hallucinates

Ditto on hallucinations - it is so easy to lead it into spewing garbage; I can tell that it is garbage because I have a lot of experience in, e.g., that particular domain, but ye gods, the ease with which it generates something that to a layman sounds valid and even ‘has results’ in a cursory search, when all it’s really done is blend two fields because it’s confused one homonym in one field for another - it gives me pause.

It isn’t so much that it can’t generate useful and correct output (it can, especially for tasks that have been completed before); it’s that it first and foremost generates plausible output. So I heavily lean in favor of the difficulty of verifying the correctness of the output as an inhibition to the use of AI, distinct from the moral issues of how they trained it.

So, best used like a smart autocomplete for rote syntactic transformations, organization and search, and nomenclature suggestions, rather than for deep pontificating. I swear it spends half its energy generating empty platitudes about how amazing my approach is and how deep my questions are, because at their core those corporate AIs are not designed to answer your questions; they are designed to keep you using them, and making you feel like they answered your question is how they go about doing that, regardless of what the output actually was.

But for those limited tasks, where they cannot escape into some imaginary hallucinated domain, they are very helpful.

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775#post_21 Sat, 14 Mar 2026 20:51:58 +0000 discourse.haskell.org-post-54184
Announcing `crem` This is nice! I like the simplicity combined with type safety. I could imagine trying to apply this for domain modelling as a more formalised kind of diagramming, with the added bonus that it could actually be executed.

]]>
https://discourse.haskell.org/t/announcing-crem/6012#post_8 Sat, 14 Mar 2026 20:51:57 +0000 discourse.haskell.org-post-54183
Yet another opinion on LLMs · Hasufell's blog idk I have unlimited usage at work so I just use Opus and it immediately hallucinates [1] if I have it generate more than like a function. Which isn’t useless - it’s a husk I can fill with value. Saves me some typing of the general boilerplate and imports. Not a bad use of like 50c.

Also, if I do keep it small, the functions it generates are pretty bad? So much case splitting - I would ask a junior to improve it before merge. Instead, with Claude, I improve it and have Claude tell me how and why my Haskell is better than its suggestion. Opus is good at that!

inb4 the general follow-ups of all the stuff I now have to do to get this thing to maybe be as useful as my bare hands and brain in emacs and ghci lolol

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775#post_20 Sat, 14 Mar 2026 17:53:43 +0000 discourse.haskell.org-post-54182
Yet another opinion on LLMs · Hasufell's blog I honestly think the difference is the models we’re using. The new things I learn from them are “how to do things” – I then do the things and it works, so I am directly verifying how accurate the information is. They do occasionally make mistakes, but it’s usually pretty simple things.

On the other hand, I haven’t tried getting them to e.g. come up with new abstractions in a domain, or do larger-scale creation – that might be beyond their capabilities.

]]>
https://discourse.haskell.org/t/yet-another-opinion-on-llms-hasufells-blog/13775#post_19 Sat, 14 Mar 2026 17:48:57 +0000 discourse.haskell.org-post-54181