
Breast Cancer

1557 likes

Submitted 8 months ago by fossilesque@mander.xyz to science_memes@mander.xyz

https://mander.xyz/pictrs/image/61447ec9-3349-43fe-a7dd-85a17cefef94.jpeg

source

Comments

  • parpol@programming.dev ⁨8⁩ ⁨months⁩ ago
    [deleted]
    source
    • FierySpectre@lemmy.world ⁨8⁩ ⁨months⁩ ago

      Using AI for anomaly detection is nothing new though. Haven’t read the article (and I doubt it’s going to be that technical) but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.

      source
      • Johanno@feddit.org ⁨8⁩ ⁨months⁩ ago

That’s why I hate the term AI. Say it’s a predictive LLM or a pattern-recognition model.

        source
        • -> View More Comments
      • PM_ME_VINTAGE_30S@lemmy.sdf.org ⁨8⁩ ⁨months⁩ ago

        Haven’t read any article about this specific ‘discovery’ but usually this uses a completely different technique than the AI that comes to mind when people think of AI these days.

        From the conclusion of the actual paper:

        Deep learning models that use full-field mammograms yield substantially improved risk discrimination compared with the Tyrer-Cuzick (version 8) model.

        If I read this paper correctly, the novelty is in the model, which is a deep learning model that works on mammogram images + traditional risk factors.

        source
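If that reading is right, the fusion idea can be sketched roughly like this (a toy stand-in, not the paper's architecture; every function, weight, and number below is invented for illustration):

```python
import math

# Toy sketch of "mammogram image + traditional risk factors" fusion.
# image_branch() stands in for a CNN that turns pixels into an embedding.
def image_branch(pixels):
    mean = sum(pixels) / len(pixels)
    return [mean, max(pixels) - min(pixels)]  # tiny 2-number "embedding"

def predict_risk(pixels, risk_factors, weights, bias):
    # Late fusion: concatenate the image embedding with tabular risk
    # factors, then apply a logistic layer to get a risk score in (0, 1).
    features = image_branch(pixels) + risk_factors
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical inputs: three "pixels", plus age and family history.
score = predict_risk([0.1, 0.9, 0.4], [52, 1.0], [0.5, 0.5, 0.01, 0.3], -1.5)
print(round(score, 3))
```

The real model would use a deep image backbone and learned weights; the point is only that image-derived features and classical risk factors end up in one shared predictor.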
        • -> View More Comments
    • SomeGuy69@lemmy.world ⁨8⁩ ⁨months⁩ ago

It’s really difficult to clean that data. In another case, they kept the markings on the training data, and the images from patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer images just by the lack of a signature. However, these people are also getting smarter about picking their training data, so it’s not impossible for it to work properly at some point.

      source
    • EatATaco@lemm.ee ⁨8⁩ ⁨months⁩ ago

      Citation please?

      source
    • earmuff@lemmy.dbzer0.com ⁨8⁩ ⁨months⁩ ago

      That’s the nice thing about machine learning, as it sees nothing but something that correlates. That’s why data science is such a complex topic, as you do not see errors this easily. Testing a model is still very underrated and usually there is no time to properly test a model.

      source
  • superkret@feddit.org ⁨8⁩ ⁨months⁩ ago

    Why do I still have to work my boring job while AI gets to create art and look at boobs?

    source
    • SomeGuy69@lemmy.world ⁨8⁩ ⁨months⁩ ago

Because life is suffering and machines dream of electric sheep.

      source
      • Empricorn@feddit.nl ⁨8⁩ ⁨months⁩ ago

        I dream of boobs.

        source
  • ALoafOfBread@lemmy.ml ⁨8⁩ ⁨months⁩ ago

Now make mammograms not cost $500, not have a 6-month waiting time, and be available to women under 40.

    source
    • ConstantPain@lemmy.world ⁨8⁩ ⁨months⁩ ago

      It’s already this way in most of the world.

      source
      • ALoafOfBread@lemmy.ml ⁨8⁩ ⁨months⁩ ago

        Oh for sure. I only meant in the US where the university is located. But it’s already a useful breakthrough for everyone in civilized countries

        source
        • -> View More Comments
    • Mouselemming@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

      Better yet, give us something better to do about the cancer than slash, burn, poison. Something that’s less traumatic on the rest of the person, especially in light of the possibility of false positives.

      source
      • Tja@programming.dev ⁨8⁩ ⁨months⁩ ago

        Also, flying cars and the quadrature of the circle.

        source
    • asbestos@lemmy.world ⁨8⁩ ⁨months⁩ ago

      I think it’s free in most of Europe, or relatively cheap

      source
    • Tja@programming.dev ⁨8⁩ ⁨months⁩ ago

      Done.

      source
  • cecinestpasunbot@lemmy.ml ⁨8⁩ ⁨months⁩ ago

Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer. However, if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    source
    • Maven@lemmy.zip ⁨8⁩ ⁨months⁩ ago

Another big thing to note: we recently had a different but VERY similar headline about an AI that could find typhoid early and point it out more accurately than doctors could.

But when they examined the AI to see what it was doing, it turned out that it was weighing the specs of the machine being used to do the scan… An older machine meant the area was likely poorer and therefore more likely to have typhoid. The AI wasn’t detecting whether someone had typhoid; it was just telling you whether they were in a rich area or not.

      source
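That kind of shortcut is easy to reproduce on synthetic data (the base rates below are invented to mimic the confounding, not taken from any real study): a "model" that only looks at scanner metadata can still look accurate.

```python
import random

random.seed(0)

# Invented, confounded base rates: in this toy population, old scanners
# sit in poorer areas where the disease is far more common.
def make_patient():
    old_scanner = random.random() < 0.5
    p_disease = 0.90 if old_scanner else 0.05
    return old_scanner, random.random() < p_disease

data = [make_patient() for _ in range(10_000)]

# A "classifier" that never reads the scan at all:
def shortcut_predict(old_scanner):
    return old_scanner  # predict disease iff the machine is old

accuracy = sum(shortcut_predict(old) == sick for old, sick in data) / len(data)
print(round(accuracy, 2))  # high accuracy without looking at a single image
```

The headline metric looks great; the model has learned geography, not medicine. This is exactly why the training-data audits mentioned above matter.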
      • KevonLooney@lemm.ee ⁨8⁩ ⁨months⁩ ago

        That’s actually really smart. But that info wasn’t given to doctors examining the scan, so it’s not a fair comparison. It’s a valid diagnostic technique to focus on the particular problems in the local area.

        “When you hear hoofbeats, think horses not zebras” (outside of Africa)

        source
        • -> View More Comments
      • Tja@programming.dev ⁨8⁩ ⁨months⁩ ago

It is quite a statement that it still had a better detection rate than doctors.

What is more important, saving lives or not offending people?

        source
        • -> View More Comments
    • Vigge93@lemmy.world ⁨8⁩ ⁨months⁩ ago

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

      source
    • ColeSloth@discuss.tchncs.de ⁨8⁩ ⁨months⁩ ago

      Not at all, in this case.

A false positive rate of even 50% can mean telling the patient “you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate for a 5-year prediction would still only be telling something like 15% of women to be screened more often.

      source
    • CptOblivius@lemmy.world ⁨8⁩ ⁨months⁩ ago

Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.

      source
      • cecinestpasunbot@lemmy.ml ⁨8⁩ ⁨months⁩ ago

        That’s just not generally true. Mammograms are usually only recommended to women over 40. That’s because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good thanks in part to the problem of false positives.

        source
        • -> View More Comments
    • snek@lemmy.world ⁨8⁩ ⁨months⁩ ago

How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? It’s the false negatives that worry me the most.

      source
      • cecinestpasunbot@lemmy.ml ⁨8⁩ ⁨months⁩ ago

        It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

Let’s say you have 10,000 patients. 10 have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5%, that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow-up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis for the 10 patients who do have cancer.

        source
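The arithmetic in that comment, written out (these are the comment's hypothetical numbers, not real screening statistics):

```python
patients = 10_000
true_cases = 10             # patients with cancer or a precancerous lesion
sensitivity = 1.0           # assume the test catches all 10
false_positive_rate = 0.05  # 5% of healthy patients get flagged anyway

healthy = patients - true_cases
true_positives = round(true_cases * sensitivity)
false_positives = round(healthy * false_positive_rate)

# Of everyone flagged, the fraction who actually have cancer (the PPV):
ppv = true_positives / (true_positives + false_positives)

print(false_positives)  # ~500 people sent for follow-ups they don't need
print(round(ppv, 3))    # ~0.02: roughly 98% of positive calls are false
```

With a rare condition, even a small false positive rate swamps the true cases, which is the core of the harm/benefit trade-off being described.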
      • MonkeMischief@lemmy.today ⁨8⁩ ⁨months⁩ ago

        Well it’d certainly benefit the medical industry. They’d be saddling tons of patients with surgeries, chemotherapy, mastectomy, and other treatments, “because doctor-GPT said so.”

        But imagine being a patient getting physically and emotionally altered, plunged into irrecoverable debt, distressing your family, and it all being a whoopsy by some black-box software.

        source
        • -> View More Comments
  • yesman@lemmy.world ⁨8⁩ ⁨months⁩ ago

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

    source
    • CheesyFox@lemmy.sdf.org ⁨8⁩ ⁨months⁩ ago

Good luck reverse-engineering millions if not billions of seemingly random floating point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.

Under no circumstance should we accept a “black box” explanation.

Go learn at least the basic principles of neural networks, because this sentence of yours alone makes me want to slap you.

      source
      • thecodeboss@lemmy.world ⁨8⁩ ⁨months⁩ ago

        Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

        source
      • petrol_sniff_king@lemmy.blahaj.zone ⁨8⁩ ⁨months⁩ ago

        Hey look, this took me like 5 minutes to find.

        Censius guide to AI interpretability tools

        Here’s a good thing to wonder: if you don’t know how you’re black box model works, how do you know it isn’t racist?

        Here’s what looks like a university paper on interpretability tools:

        As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

        Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

        Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.

        Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.

        source
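For a concrete taste of what such tools do, here is one of the simplest interpretability techniques, permutation importance, run on a hand-written stand-in model (everything below is a toy; the tools in the linked guide are far more sophisticated): shuffle one input feature and see how much accuracy drops.

```python
import random

random.seed(1)

def model(x):
    return x[0] > 0.5  # this stand-in "model" secretly ignores feature 1

data = [[random.random(), random.random()] for _ in range(1000)]
labels = [model(x) for x in data]

def accuracy(rows):
    return sum(model(x) == y for x, y in zip(rows, labels)) / len(rows)

# Permutation importance: shuffle one feature column and measure how far
# accuracy falls. A big drop means the model relies on that feature.
drops = {}
for feature in (0, 1):
    shuffled = [row[:] for row in data]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    drops[feature] = accuracy(data) - accuracy(shuffled)

print({f: round(d, 2) for f, d in drops.items()})  # big drop for 0, none for 1
```

Even without reading a single weight, this reveals which inputs the model actually uses, which is exactly the kind of question (e.g. "is it keying on race?") the guides above are about.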
        • -> View More Comments
    • CheeseNoodle@lemmy.world ⁨8⁩ ⁨months⁩ ago

      iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

      source
      • Johanno@feddit.org ⁨8⁩ ⁨months⁩ ago

Well, in theory you can explain how the model comes to its conclusion. However, I guess that only 0.1% of “AI engineers” are actually capable of that. And they probably cost $100k per month.

        source
      • Atrichum@lemmy.world ⁨8⁩ ⁨months⁩ ago

        Link?

        source
        • -> View More Comments
      • Tryptaminev@lemm.ee ⁨8⁩ ⁨months⁩ ago

It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.

        source
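As a sketch of that contrast (data is made up): a one-rule "decision stump" is a model you can read in its entirety, unlike millions of opaque weights.

```python
def fit_stump(values, labels):
    """Find the single threshold that best separates the two classes."""
    best_threshold, best_correct = None, -1
    for t in sorted(set(values)):
        correct = sum((v >= t) == bool(y) for v, y in zip(values, labels))
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

tumor_sizes = [0.2, 0.4, 0.5, 1.1, 1.3, 2.0]  # hypothetical feature
is_malignant = [0,   0,   0,   1,   1,   1]

threshold = fit_stump(tumor_sizes, is_malignant)
print(threshold)  # 1.1: the entire model is the rule "size >= 1.1"
```

Real interpretable models (decision trees, linear scorecards) are richer than one rule, but the decision they make can be stated in plain language, which is the point being argued.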
    • MystikIncarnate@lemmy.ca ⁨8⁩ ⁨months⁩ ago

IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts just to start explaining how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

      source
      • homura1650@lemm.ee ⁨8⁩ ⁨months⁩ ago

        The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions.” Working out how to productively throw that much compute at a problem is not easy either, and that is what ML researchers understand and are experts in.

        In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

        An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.

        source
        • -> View More Comments
    • match@pawb.social ⁨8⁩ ⁨months⁩ ago

      y = w^T^ x

      hope this helps!

      source
    • reddithalation@sopuli.xyz ⁨8⁩ ⁨months⁩ ago

Our brain is a black box, and we accept that (and control the outcomes with procedures, checklists, etc.).

It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.

      source
      • rekorse@lemmy.world ⁨8⁩ ⁨months⁩ ago

        What a vague and unprovable thing you’ve stated there.

        source
  • Wilzax@lemmy.world ⁨8⁩ ⁨months⁩ ago

    If it has just as low of a false negative rate as human-read mammograms, I see no issue. Feed it through the AI first before having a human check the positive results only. Save doctors’ time when the scan is so clean that even the AI doesn’t see anything fishy.

    Alternatively, if it has a lower false positive rate, have doctors check the negative results only. If the AI sees something then it’s DEFINITELY worth a biopsy. Then have a human doctor check the negative readings just to make sure they don’t let anything that’s worth looking into go unnoticed.

    Either way, as long as it isn’t worse than humans in both kinds of failures, it’s useful at saving medical resources.

    source
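The routing described above can be sketched in a few lines (ai_score is a hypothetical stand-in for a real model, and the threshold is arbitrary):

```python
def ai_score(scan):
    return scan["suspicion"]  # hypothetical stand-in for a real model

def triage(scans, threshold=0.1):
    """Send AI-flagged scans to a human; auto-clear the rest."""
    for_review, auto_cleared = [], []
    for scan in scans:
        if ai_score(scan) >= threshold:
            for_review.append(scan)   # a radiologist reads these
        else:
            auto_cleared.append(scan) # AI is confident the scan is clean
    return for_review, auto_cleared

scans = [{"id": i, "suspicion": s} for i, s in enumerate([0.02, 0.9, 0.05, 0.4])]
for_review, cleared = triage(scans)
print([s["id"] for s in for_review])  # [1, 3] go to the radiologist
print([s["id"] for s in cleared])     # [0, 2] are auto-cleared
```

Whether humans review the positives, the negatives, or both is exactly the policy question the comment raises; the pipeline shape stays the same.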
    • match@pawb.social ⁨8⁩ ⁨months⁩ ago

      an image recognition model like this is usually tuned specifically to have a very low false negative (well below human, often) in exchange for a high false positive rate (overly cautious about cancer)!

      source
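That tuning is often done as a threshold sweep on held-out validation scores (the scores and labels below are invented): pick the loosest cutoff that still meets a sensitivity target.

```python
def pick_threshold(scores, labels, target_sensitivity=0.99):
    """Highest threshold whose sensitivity (recall) meets the target."""
    for t in sorted(set(scores), reverse=True):
        tp = sum(s >= t and y for s, y in zip(scores, labels))
        fn = sum(s < t and y for s, y in zip(scores, labels))
        if tp / (tp + fn) >= target_sensitivity:
            return t
    return min(scores)

# Invented validation scores; label 1 = biopsy-confirmed cancer.
scores = [0.1, 0.8, 0.35, 0.7, 0.2, 0.9, 0.6]
labels = [0,   1,   1,    1,   0,   1,   0]

print(pick_threshold(scores, labels))  # 0.35: every cancer case scores >= it
```

Lowering the threshold this way is precisely the trade mentioned above: fewer missed cancers, bought with more false alarms.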
    • Railing5132@lemmy.world ⁨8⁩ ⁨months⁩ ago

This is exactly what is being done. My eldest child is in a Ph.D. program for human-robot interaction and medical intervention, and has worked on image analysis systems in this field. Their intended use is exactly that: a “first look” and a “second look”. A first look to help catch the small, easily overlooked pre-tumors and tentatively mark clear ones. A second look to be a safety net for tired, overworked, or outdated eyes.

      source
    • Dkarma@lemmy.world ⁨8⁩ ⁨months⁩ ago

      You in QA?

      source
      • Dicska@lemmy.world ⁨8⁩ ⁨months⁩ ago

        Image

        source
      • Wilzax@lemmy.world ⁨8⁩ ⁨months⁩ ago

        HAHAHAHA thank fuck I am not

        source
    • UNY0N@lemmy.world ⁨8⁩ ⁨months⁩ ago

      Nice comment. I like the detail.

      For me, the main takeaway doesn’t have anything to do with the details though, it’s about the true usefulness of AI. The details of the implementation aren’t important, the general use case is the main point.

      source
  • Moah@lemmy.blahaj.zone ⁨8⁩ ⁨months⁩ ago

    Ok, I’ll concede. Finally a good use for AI. Fuck cancer.

    source
    • ilinamorato@lemmy.world ⁨8⁩ ⁨months⁩ ago

      It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.

      source
      • bluewing@lemm.ee ⁨8⁩ ⁨months⁩ ago

        The hypesters and grifters do not prevent AI from being used for truly valuable things even now. In fact medical uses will be one of those things that WILL keep AI from just fading away.

        Just look at those marketing wankers as a cherry on the top that you didn’t want or need.

        source
        • -> View More Comments
      • Cethin@lemmy.zip ⁨8⁩ ⁨months⁩ ago

        Also, for GPU prices to come down. Right now the AI garbage is eating a lot of the GPU production, as well as wasting a ton of energy.

        source
        • -> View More Comments
      • Tja@programming.dev ⁨8⁩ ⁨months⁩ ago

        Those are going to make a ton of money for a lot of people. Every 1% fuel efficiency gained, every second saved in an industrial process, it’s hundreds of millions of dollars.

        You don’t need AI in your fridge or in your snickers, that will (hopefully) die off, but AI is not going away where it matters.

        source
        • -> View More Comments
      • RampantParanoia2365@lemmy.world ⁨8⁩ ⁨months⁩ ago

        A cure for cancer, if it can be literally nipped in the bud, seems like a possible money-maker to me.

        source
        • -> View More Comments
    • blackbirdbiryani@lemmy.world ⁨8⁩ ⁨months⁩ ago

      Honestly they should go back to calling useful applications ML (that is what it is) since AI is getting such a bad rap.

      source
      • medgremlin@midwest.social ⁨8⁩ ⁨months⁩ ago

I once had ideas about building a machine learning program to assist workflows in Emergency Departments, and its training data would be entirely generated by the specific ER it’s deployed in. Because of differences in populations, the data is not always readily transferable between departments.

        source
      • 0laura@lemmy.dbzer0.com ⁨7⁩ ⁨months⁩ ago

Machine learning is a type of AI. Sci-fi movies just misused the term, and now the startups are riding the hype train. AGI =/= AI. There’s lots of stuff to complain about with AI these days, like stable diffusion image generation and LLMs, but the fact that they are AI is simply true.

        source
        • -> View More Comments
  • Snapz@lemmy.world ⁨8⁩ ⁨months⁩ ago

    And if we weren’t a big, broken mess of late stage capitalist hellscape, you or someone you know could have actually benefited from this.

    source
    • unconsciousvoidling@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

      Yea none of us are going to see the benefits. Tired of seeing articles of scientific advancement that I know will never trickle down to us peasants.

      source
      • Telodzrum@lemmy.world ⁨8⁩ ⁨months⁩ ago

Our clinics are already using AI to clean up MRI images for easier and higher quality reads. We use AI on our cath lab table to provide a less noisy image at a much lower rad dose.

        source
        • -> View More Comments
      • Tja@programming.dev ⁨8⁩ ⁨months⁩ ago

        … they said, typing on a tiny silicon rectangle with access to the whole of humanity’s knowledge and that fits in their pocket…

        source
    • venoft@lemmy.world ⁨8⁩ ⁨months⁩ ago

      I’m involved in multiple projects where stuff like this will be used in very accessible manners, so don’t get too pessimistic.

      source
  • PlantDadManGuy@lemmy.world ⁨8⁩ ⁨months⁩ ago

    Not my proudest fap…

    source
    • Emmie@lemm.ee ⁨8⁩ ⁨months⁩ ago

Honestly, with all due respect, that is a really shitty joke. It’s goddamn breast cancer, the opposite of hot.

      source
      • PlantDadManGuy@lemmy.world ⁨8⁩ ⁨months⁩ ago

        Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        source
        • -> View More Comments
    • tigeruppercut@lemmy.zip ⁨8⁩ ⁨months⁩ ago

      That’s a challenging wank

      Image

      source
      • sunbeam60@lemmy.one ⁨8⁩ ⁨months⁩ ago

        Man I miss him.

        source
  • ShinkanTrain@lemmy.ml ⁨8⁩ ⁨months⁩ ago

    I can do that too, but my rate of success is very low

    source
  • insufferableninja@lemdro.id ⁨8⁩ ⁨months⁩ ago

    pretty sure iterate is the wrong word choice there

    source
    • Peps@lemmy.world ⁨8⁩ ⁨months⁩ ago

      They probably meant reiterate

      source
      • Mouselemming@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

        I think it’s a joke, like to imply they want to not just reiterate, but rerererereiterate this information, both because it’s good news and also in light of all the sucky ways AI is being used instead.

        source
  • gmtom@lemmy.world ⁨8⁩ ⁨months⁩ ago

This is similar to what I did for my masters, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for like 95% of cases within a couple months, but it wasn’t until almost 2 years later they got to do their first actual trial.

    source
  • wheeldawg@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

    Yes, this is “how it was supposed to be for”.

The quality of sentence construction these days is in freefall.

    source
  • bluefishcanteen@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

This is a great use of tech. With that said, I find that the lines are blurred between “AI” and machine learning.

Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    source
  • ModerateImprovement@sh.itjust.works ⁨8⁩ ⁨months⁩ ago

    Where is the meme?

    source
  • earmuff@lemmy.dbzer0.com ⁨8⁩ ⁨months⁩ ago

Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. 5 years ago I created a model that was able to spot certain types of ships based only on satellite imagery, ships which were not easily detectable by eye, never mind the fact that one human cannot scan 15k images in one hour anyway. Similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

    source
  • NuraShiny@hexbear.net ⁨8⁩ ⁨months⁩ ago

    No link or anything, very believable.

    source
  • TCB13@lemmy.world ⁨8⁩ ⁨months⁩ ago

    AI should be used for this, yes, however advertisement is more profitable.

    source
  • mayo_cider@hexbear.net ⁨8⁩ ⁨months⁩ ago

    Neural networks are great for pattern recognition, unfortunately all the hype is in pattern generation and we end up with MRI images of breasts in anime style

    source
  • Slotos@feddit.nl ⁨8⁩ ⁨months⁩ ago

    youtube.com/shorts/xIMlJUwB1m8?si=zH6eF5xZ5Xoz_zs…

    Detecting is not enough to be useful.

    source
  • MonkderVierte@lemmy.ml ⁨8⁩ ⁨months⁩ ago

    Btw, my dentist used AI to identify potential problems in a radiograph.

    source
  • MadBob@feddit.nl ⁨8⁩ ⁨months⁩ ago

    I had a housemate a couple of years ago who had a side job where she’d look through a load of these and confirm which were accurate. She didn’t say it was AI though.

    source
  • Melonpoly@lemmy.world ⁨8⁩ ⁨months⁩ ago

    Can’t pigeons do the same thing?

    source
  • suction@lemmy.world ⁨8⁩ ⁨months⁩ ago

    Wanna bet it’s not “AI” ?

    source
  • elrik@lemmy.world ⁨8⁩ ⁨months⁩ ago

    Ductal carcinoma in situ (DCIS) is a type of preinvasive tumor that sometimes progresses to a highly deadly form of breast cancer. It accounts for about 25 percent of all breast cancer diagnoses.

    Because it is difficult for clinicians to determine the type and stage of DCIS, patients with DCIS are often overtreated. To address this, an interdisciplinary team of researchers from MIT and ETH Zurich developed an AI model that can identify the different stages of DCIS from a cheap and easy-to-obtain breast tissue image. Their model shows that both the state and arrangement of cells in a tissue sample are important for determining the stage of DCIS.

    news.mit.edu/…/ai-model-identifies-certain-breast…

    source
  • probableprotogen@lemmy.dbzer0.com ⁨8⁩ ⁨months⁩ ago

I really wouldn’t call this AI. It is more or less an image identification system that relies on machine learning.

    source
  • humbletightband@lemmy.dbzer0.com ⁨8⁩ ⁨months⁩ ago

    Haha I love Gell-Mann amnesia. A few weeks ago there was news about speeding up the internet to gazillion bytes per nanosecond and it turned out to be fake.

    Now this thing is all over the internet and everyone believes it.

    source
  • JimVanDeventer@lemmy.world ⁨8⁩ ⁨months⁩ ago

    The AI genie is out of the bottle and — as much as we complain — it isn’t going away; we need thoughtful legislation. AI is going to take my job? Fine, I guess? That sounds good, really. Can I have a guaranteed income to live on, because I still need to live? Can we tax the rich?

    source
  • cumskin_genocide@lemm.ee ⁨8⁩ ⁨months⁩ ago

    Nooooooo you’re supposed to use AI for good things and not to use it to generate meme images.

    source
  • orphiebaby@lemm.ee ⁨8⁩ ⁨months⁩ ago

    Good news, but it’s not “AI”. Please stop calling it that.

    source
-> View More Comments