
Scientists should use AI as a tool, not an oracle

15 likes

Submitted 1 year ago by bot@lemmy.smeargle.fans [bot] to hackernews@lemmy.smeargle.fans

https://www.aisnakeoil.com/p/scientists-should-use-ai-as-a-tool

HN Discussion


Comments

  • lvxferre@mander.xyz 1 year ago

    Regarding linguistics, the use of machine “learning” models feels justified in some cases. Often you have a huge amount of repetitive data that you need to sort out and generalise, and M“L” is great for that.

    For example, let’s say that you’re studying some specific vowel feature. You’re probably recording the same word thrice for each informant; and there are, like, 15 words? You’ll want people from different educational backgrounds, men and women, and different ages, so let’s say 10 informants. From that you’re already dealing with 450 recordings that you’ll need to throw into Praat, identify the relevant vowel in, and measure the average F₁ and F₂ values of.

    It’s an extremely simple task, easy to generalise, but damn laborious to perform by hand. That’s where machine learning should kick in; you should be able to teach it “this is a vowel, look at those smooth formants, we want this” vs. “this is a fricative, it has static-like noise, disregard it”, then feed it the 450 audio files, and have it output the F₁ and F₂ values for you in a clean .csv.
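    The post-processing half of that pipeline (averaging the three takes per informant and writing the clean .csv) can be sketched with the Python standard library alone. The informant IDs, words, and formant values below are made-up placeholders standing in for whatever a formant tracker would actually emit:

    ```python
    import csv
    import io
    from collections import defaultdict
    from statistics import mean

    # Hypothetical per-recording measurements: (informant, word, F1 Hz, F2 Hz).
    # In practice each row would come from a formant tracker (e.g. Praat) run
    # on one of the ~450 recordings; these values are invented for illustration.
    measurements = [
        ("inf01", "bat", 740.2, 1760.5),
        ("inf01", "bat", 731.8, 1748.9),
        ("inf01", "bat", 752.6, 1771.3),
        ("inf02", "bat", 705.0, 1802.4),
        ("inf02", "bat", 698.7, 1815.1),
        ("inf02", "bat", 712.3, 1798.6),
    ]

    def average_formants(rows):
        """Average all takes of each (informant, word) pair."""
        groups = defaultdict(list)
        for informant, word, f1, f2 in rows:
            groups[(informant, word)].append((f1, f2))
        return [
            (informant, word,
             round(mean(f1 for f1, _ in takes), 1),
             round(mean(f2 for _, f2 in takes), 1))
            for (informant, word), takes in sorted(groups.items())
        ]

    # Write the averaged values as a clean CSV (to a string here; swap the
    # StringIO for open("formants.csv", "w", newline="") to get a real file).
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["informant", "word", "F1_mean_Hz", "F2_mean_Hz"])
    writer.writerows(average_formants(measurements))
    print(buf.getvalue())
    ```

    The grouping key is (informant, word), so the same sketch scales unchanged from this toy input to the full 10 informants × 15 words × 3 takes.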

    And, if you’re using it to draw the conclusions for you, you probably suck as a researcher.


    From the HN comments. “A”, “B”, “C” are different posters; the numbers just index their comments.

    [A1] I feel like 90% of AI discussions online these days can be shut down with “a probabilistic syllable generator is not intelligence”

    I also feel so. But trashy people especially want to believe; they have a moral flaw called “faith”, and they want to roll in it like pigs roll in mud (another type of filth). So every time you state the obvious, they will dispute it, and since moronic trash is a dime a dozen, their crowds effectively work like one big moustached “MRRROOOOOOO”-ing sealion.

    Further quotes refer to comments replying to the above, as they exemplify it.

    [B] [answering the above] How do you define intelligence?

    That user is playing musical chairs with definitions. If you do provide a definition, users like this are prone to waste your time chain-gunning “ackshyually” statements, eventually evolving into an appeal to ignorance plus ad nauseam.

    [C1] Humans are not fact machines, we are often wrong. Do humans not have intelligence?

    [A2] Like clockwork, out come the “but humans” deflections. An LLM is not a human-like intelligence. This is patently obvious, such comparisons are nonsensical and just further the problem of people anthropomorphizing a tool and treating it like an oracle.

    [C2] You didn’t answer the question.

    [A3] I did, I said they aren’t human-like intelligences, so countering with “humans make mistakes, are humans not intelligent?” is drawing a false equivalence between humans and LLMs. // Since we do not possess a definition of intelligence that isn’t human-like, it would be meaningless to argue if LLMs are intelligent in general. All that can be said is that they are not intelligent in the way that humans are.

    A is calling C1 a deflection. I’d go further: it’s whataboutism + extended analogy.

    C2 is basically “I demand you to bite my whataboutism, REEEEEEE!”, potentially disguised under feigned illiteracy. A2 does answer C1, albeit indirectly.

    At this rate A bites the bait in A3; “All that can be said is that they are not intelligent in the way that humans are.” leaves room for “ackshyually” and similar idiocies.

    [image from the linked article]

    This is from the text linked in the OP. The very discussion in “Reddit LARPs as h4x0rz @ ycombinator dot com” is a great example of that.

  • Mikufan@ani.social 1 year ago

    LLMs are gambling, not a tool.

    • lvxferre@mander.xyz 1 year ago

      Note that machine “learning” (the topic) is considerably wider than LLMs.

      That said, even LLMs are a tool. And they’re useful as such, as long as you don’t pretend that they understand anything, or that they’re intelligent. They can’t follow basic reasoning, but they’re rather good at retrieving human-produced info.
