• 0 Posts
  • 54 Comments
Joined 10 months ago
Cake day: February 6th, 2025


  • For the vast majority of these companies, probably not.

    If the company is AI-only, then if/when the bubble bursts, I suspect it’ll go under too. Only the biggest players will survive that, like OpenAI, since so many other services call out to their API.

    Companies that can pivot back to core markets, like Google and Microsoft, will be fine. Shovel sellers will mostly be okay too. What that’ll look like for them is a period of huge overvaluation followed by a return to sanity; you can see similar histories in the stock prices of still-extant dotcom-bubble companies.

    And then the hype will be over. There will be a huge crater left in GDP, retirement accounts, and the larger economy, but some “AI” technology will remain: stuff that is actually useful, like transcription, natural speech, noise removal, and automated rotoscoping. But the fantasy of replacing information workers and artists will not come to pass, though the market probably won’t differentiate for the next several years, as the jobs market is decimated all the same by the speculation hangover.






  • It will take a prompt, then run multiple web searches to get relevant info, recursively if needed, and then give a meaningful response with citations.

    Do you have an example of this you can provide verbatim?

    I’m just curious; I think the one application LLMs might actually be viable for is exactly this kind of connection finding in a large corpus, and since I’m doing lots of research, I might actually find personal utility.
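
    The loop described above (prompt → search → recurse on follow-ups → answer with citations) can be sketched roughly like this. This is a minimal illustration, not any real product’s implementation; `search` is a hypothetical stub standing in for a web-search call:

    ```python
    # Hypothetical sketch of a recursive search-and-summarize loop.
    # `search` is a stub standing in for a real web search API.

    def search(query):
        # Stand-in for a web search; returns (source, snippet) pairs.
        corpus = {
            "ai bubble": [("Example Times", "Analysts debate an AI correction.")],
            "dotcom comparison": [("Example Journal", "Parallels to 1999 valuations.")],
        }
        return corpus.get(query, [])

    def answer(prompt, queries, depth=2):
        """Gather results for each query, recursing on broadened
        follow-up queries when a search comes back empty, then
        return a summary string with citations attached."""
        results = []
        for q in queries:
            hits = search(q)
            results.extend(hits)
            if not hits and depth > 0:
                # Nothing found: retry with a broadened query, one level deeper.
                results.extend(answer(prompt, [q.split()[0]], depth - 1)[1])
        citations = ", ".join(source for source, _ in results)
        return f"{prompt} ({len(results)} sources: {citations})", results

    summary, sources = answer(
        "Is there an AI bubble?", ["ai bubble", "dotcom comparison"]
    )
    ```

    In a real system the recursion step would be driven by a model deciding which follow-up queries to issue, but the control flow is the same shape.
    
    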


  • The AI assistant is backed by several different models which, IIRC, just call out to those providers (Op*nAI API, etc.) and rack up tokens in the billing system. You might be right that the AI cost is included when below the plan price; to that I have to say, give me a fucking cheaper plan that doesn’t implicitly include the cost of AI.


  • As far as I can tell, you have to completely disable all keyboard shortcuts, or else when you press A anywhere that isn’t the search box, you get dumped immediately into their AI assistant, prompted with whatever you already had in the search bar.

    It didn’t cost me more than a few pennies, but on principle, the several times that happened made me angry.

    Apparently some of the news views in search can also easily dump you into AI land. There’s a community CSS add-on that hides all that stuff now, but I wish the company would just let me disable the AI traps.


  • I use Kagi because the truth is all other corporate alternatives at this point are unusable swill.

    That said, I do not like the company and disagree with their choices in many aspects.

    For one, while they don’t force you to use AI features, there isn’t a way to explicitly turn them off for your account, so there’s always the opportunity to rack up token costs if you accidentally hit one of the AI buttons.

    They still don’t run their own index, content instead to just pay the other search providers. Additionally, if you’re trying to escape Google… Kagi runs on Google Cloud Services.

    There are more complaints, and I’m sure others will chime in, but that’s my take.


  • It’s funny because I’ve had this one particular issue with two Asus routers that I manage for family…

    They use this plunger power-button design: you push the button in and it toggle-locks into place. The problem is that after a few years, whatever mechanism retains the plunger fails, so it always springs back and keeps the device from staying on. So far the solution has been to cram a paper clip down the housing to hold it in. I just find comedy in having to apply that fix twice.




  • The point here is that two V-Cache CCDs just don’t make sense for consumer use cases.

    Because of the latency topology, you’d only see a benefit if both CCDs were running independent programs that are more strictly sensitive to latency than to compute throughput. That’s a very niche subset of uses and, like I said, ideal for a GSP deployment. This should be a product, but in the EPYC 4005 family.

    Otherwise, you’re better off with some sort of heterogeneous topology; one can imagine an 8-core V-Cache CCD paired with a 16-c-core CCD (this is very roughly what Intel is pursuing with upcoming products), which offers compelling utility even in the consumer space.