• VibeSurgeon@piefed.social · 17 days ago

    Case in point: AI models could be written to be more efficient in token use

They are being written to be more efficient at inference, but the gains are being offset by efforts to wring more capabilities out of the models through ballooning token use.

Which is indeed a form of Jevons paradox.
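
A quick back-of-envelope sketch of the offset, with made-up numbers purely for illustration:

```python
# Hypothetical numbers illustrating how per-token efficiency gains
# can be swallowed by growth in tokens per query (Jevons-style offset).

cost_per_token_old = 1.0      # arbitrary cost units per generated token
cost_per_token_new = 0.5      # assume inference optimizations halve per-token cost

tokens_per_query_old = 500    # plain single-pass answer
tokens_per_query_new = 2000   # chain-of-thought / agentic loops balloon token use

cost_old = cost_per_token_old * tokens_per_query_old   # 500 units
cost_new = cost_per_token_new * tokens_per_query_new   # 1000 units

print(f"Old cost per query: {cost_old}")
print(f"New cost per query: {cost_new}")
# Per-token cost halved, yet total cost per query doubled:
# the efficiency gain is more than consumed by the extra tokens.
```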