• 33 Posts
  • 699 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • Yes, the point is LLMs are AI.

    So, we’ve gone full circle. LLMs are a sub-category within the collection of ML techniques, which is itself a sub-category within the collection of AI techniques. The point is, AI is just a term here: a label that indicates nothing about the ability of LLMs to exert any form of intelligence or reasoning. In other words, the whole field of AI could (should?) have been named “computational statistics” or “mathematics applied to numerical datasets” (or whatever else you want, really…), and LLMs would belong to the “CoStats”/“MAND” fields just as surely as they relate to “AI” today; it’s just that nobody would be silly enough to call them “artificially intelligent”.

    ML is AI too. But sales didn’t call it that because AI had the reputation of “just brute force with some heuristics”.

    What sales are you even thinking about? What do you even presume the market for ML algorithms to be? Nobody was shopping for support vector machines as a service, spending tokens on linear regression, or using convolutional neural networks via API before the current LLM craze. What the field is experiencing right now is unheard of. Never before has the private sector jumped the gun on a niche technique and spent trillions to package, anthropomorphise and market it as if “AI has finally been figured out, and we happen not only to own it, but also to sell it to you”. This deceptive rhetoric would be a much tougher sell if some early computer scientists hadn’t happened to name their field “AI”.


  • Was there even a point you wanted to make? I’m not sure exactly where you are heading with all this.

    I mean, “machine learning” was a marketing term invented exactly to avoid the A in AI.

    No, it wasn’t. ML is the sub-category within the collection of AI techniques that describes those algorithms whose behaviour is the result of fitting a training dataset to a pre-defined model by minimising an agreed-upon error function. For the longest time, we just called that “statistics”, and many ML techniques and algorithms predate computers by centuries. Your least-squares curve fitting? It qualifies as ML. That is to say, all ML is AI, but not all AI is ML.

    LLMs are no different from function fitting with least squares. They are not magical, they are not black boxes: they are fully described and completely predictable.
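    The “curve fitting qualifies as ML” point can be made concrete. Below is a minimal sketch of least-squares line fitting done the “ML way”: a pre-defined model, an agreed-upon error function, and iterative minimisation over a training dataset (the data points and hyperparameters here are illustrative, not from the original discussion):

```python
# Least-squares line fitting as "machine learning":
# model y = a*x + b, loss = mean squared error, optimizer = gradient descent.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # made-up points near y = 2x + 1

a, b = 0.0, 0.0        # pre-defined model parameters, initialised at zero
lr = 0.02              # learning rate (illustrative choice)
for _ in range(5000):  # "training" = iteratively minimising the error function
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # → 1.94 1.09, matching the closed-form least-squares fit
```

    The same parameters fall out of the centuries-old closed-form normal equations; gradient descent just reaches them iteratively, which is the framing modern ML uses.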




  • OK, but there’s not much substance to your comment besides unsubstantiated “zealotry” towards Obsidian and some general hot takes against Lemmy and the FOSS community through which it emerged.

    Maybe you could start by listing a few aspects and features of Obsidian that you deem so important and unique; I’m sure you might discover a few very compelling alternatives.

    As far as I’m concerned, I’m all set with https://triliumnotes.org/ . It’s not just a more versatile and capable note-taking app; it’s also one I can deploy simultaneously “local first” and “as a web service”, so my notes are reachable everywhere (even where I’m not allowed to install the heavy client).



  • If you drop the plain-text requirement (which IMO is anachronistic, except as a hedge against a developer turning hostile in a closed-source set-up), you may find https://triliumnotes.org/ liberating.

    If you must stick to the “notes as plain text files” paradigm, SiYuan is better than Obsidian in about every aspect, and Logseq in other, more niche ones. Trilium is better than them all (IMHO), being the only one that does “notes as data” correctly and efficiently (you don’t have the data-model divide between notes and databases seen in Notion).




  • Also, LLMs are not AI (in the sense most people would deem meaningful): there’s no reasoning involved, just a convincing illusion of it, served by extensive knowledge compaction and next-token prediction. That is not to say that LLMs are useless, or that the sort of all-powerful AI people fantasize about cannot exist (I really don’t know, and nobody does). It is to say that it’s not here yet, and that before it arrives we will see another long AI winter, followed by the emergence of something fundamentally more convincing than LLMs.
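    To make “next-token prediction” concrete, here is a deliberately tiny sketch: a bigram model that always emits the statistically most frequent follower of the current token. Real LLMs learn vastly richer probabilities over long contexts, but the generation loop is the same idea (corpus and function names here are made up for illustration):

```python
# Toy next-token predictor: count which token follows which (a bigram table),
# then "generate" by greedily picking the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def predict(token):
    # Return the statistically most likely next token seen in the corpus.
    return followers[token].most_common(1)[0][0]

print(predict("the"))  # → cat  ("cat" follows "the" twice, "mat" only once)
```

    No reasoning happens anywhere in that loop; the output merely reflects the statistics of the training text, which is the commenter’s point about the illusion of understanding.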



  • Hard disagree: the more you look at it, the more this reads as “a product to convince people who discuss technology online that this is a product for tons of people”. Most people need HDMI more than they need record-setting single-thread performance. Most people need more RAM, because almost everything they do is web-based and not macOS-native. Most people need I/O, because they have mice, weird peripherals, and tons of USB drives, mostly USB-A.

    That said, it’s a magnificent second or third device for the tech elite who are already committed to the Apple ecosystem (keeping nothing on-device, with peripheral and I/O needs covered by more general-purpose computers) and who will absolutely brag that its performance lets them do lots of coding (using that as their metric for what a good computer is). Strangely, both things can be true at the same time!
