Toronto Metropolitan University

Poisoning an already poisoned well

journal contribution
posted on 2024-02-29, 16:05, authored by Angela Misri

In the battle to protect the creations of human minds from A.I. and large language models (LLMs), which threaten to suck those creations in like a whirlpool and deliver them, bottled up, as "original" content to the masses, unattributed and unpaid, we must be careful not to poison the well of real and factual content.

Language: English

Categories: Journalism
