I know it’s a bit of a hot topic, but I’ve always seen people (online, anyway) who are either a hard yes or an absolute no on using AI. Many types of “AI” were already part of technology before this hype; I’m talking about LLMs specifically (ChatGPT, Claude, Gemini, etc.). When this bubble bursts, the technology is absolutely not going anywhere. I’m wondering if there is a case where you’ve personally used it and found it beneficial (not something you’ve read or seen somewhere). The ethics of essentially stealing vast amounts of data for training without compensation, or the enshittification of products with “AI”, is a whole other topic, but there is absolutely no way that the use of the technology itself is not beneficial somehow. Like everything else divisive, the truth is definitely somewhere in the middle.

I’ve been using Lumo from Proton for the last three weeks and it’s not bad. I’ve personally found it useful for troubleshooting issues, searching, or just helping with applying for jobs:
- It’s very good at looking past the SEO slop plaguing the internet and just gets me the information I need. I’ve tried alternative search engines (Mojeek, Startpage, SearXNG, DDG, Qwant, etc.). Most of them unfortunately aren’t very good or are just another way to use Google or Bing.
- I was having some Wi-Fi problems on a PC I was setting up and couldn’t figure out why. I told it exactly what was happening with my computer, along with the exact specs. It gave me some possible causes and some steps to try to diagnose the machine; it was very, very useful.
- I’ve been applying for so many jobs, and it’s exhausting to read hundreds of descriptions only to see one tiny thing in the middle that disqualifies me, so I pass it my resume with links and tell it to compare what I say on my resume with what the job is looking for to see if I’m a fit. When I find a good job, I ask for rewriting tips to better focus on what will stand out to a recruiter (or, to be real, an application filtering system).
I guess what I’m trying to say is it can’t all be bad.
I had AI write a Python script for me that checks a directory for zip/rar files that don’t also have video files, unzips them, and if that fails, retries at a doubling interval. It would have taken me an hour or so; it took the AI 5 minutes plus 15 minutes of me fixing it. I also had it add logging, which is definitely something I wouldn’t have been arsed to add otherwise.
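For anyone curious, here’s a rough sketch of what such a script might look like. My reading is “archives with no video file in the same directory”; the watch path, video extensions, starting delay, retry cap, and the use of patool for rar support are all my own assumptions, not the original script:

```python
# Rough sketch of the kind of script described above. The watch directory,
# video extensions, starting delay, and retry cap are assumptions, and
# handling .rar via patool is just one possible choice.
import logging
import time
from pathlib import Path

import patoolib  # pip install patool; extracts both .zip and .rar

WATCH_DIR = Path("/data/downloads")       # assumed location
VIDEO_EXTS = {".mkv", ".mp4", ".avi"}     # assumed definition of "video file"
MAX_RETRIES = 5                           # assumed cap on retries

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("unpacker")


def has_video(directory: Path) -> bool:
    """True if the directory already contains a video file."""
    return any(p.suffix.lower() in VIDEO_EXTS for p in directory.iterdir())


def extract_with_backoff(archive: Path) -> None:
    """Try to extract the archive, doubling the wait after each failure."""
    delay = 60  # seconds; assumed starting interval
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            patoolib.extract_archive(str(archive), outdir=str(archive.parent))
            log.info("extracted %s on attempt %d", archive.name, attempt)
            return
        except Exception as exc:
            log.warning("attempt %d failed for %s: %s", attempt, archive.name, exc)
            time.sleep(delay)
            delay *= 2
    log.error("giving up on %s after %d attempts", archive.name, MAX_RETRIES)


for archive in WATCH_DIR.rglob("*"):
    if archive.suffix.lower() in {".zip", ".rar"} and not has_video(archive.parent):
        extract_with_backoff(archive)
```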
It isn’t very helpful at my day job writing professional code, though, and I don’t think someone who didn’t already know how to do it themselves could make use of it.
It’s got lots of uses:
- driving up fossil fuel revenues
- providing a solid excuse for laying off a bunch of employees
- disciplining labor
- offloading blame for unpopular decisions
- increasing surveillance and nonconsensual data collection
- corporate theft from artists, claiming ‘it’s just learning data bro’, only to have the output often be 99% identical to the original ‘learning data’
- making fake videos much easier for swift political disinformation campaigns
- LLM voice agents that make scams much easier to perpetrate against the elderly
I’ve used it to help me set up a home server. I can paste text from log files or ask about something not working, and it tells me what the problem is. It gets things wrong a lot, but this is the perfect low-risk use for AI: sending me in the right direction when I have no idea why things aren’t working. When it’s completely wrong, it doesn’t really matter.
The real test for AI is: “does it matter when it is completely wrong?” If the answer is yes, then that’s not a suitable use for AI.
This. I’m a software engineer and I also sometimes use it by giving it a problem and asking for ideas on how to solve it (usually with the addition of “don’t write any code”; I do that myself, thanks). It gives me a few pointers that I can then follow. Sometimes it’s garbage, sometimes it’s quite good.
99% garbage.
If you have ever touched C++ you will know that it has godawful errors, and not even ChatGPT knows what the fuck is happening.
That’s why I’m not asking it to give me actual code I should use, but keeping it high level. If it then says there are patterns x, y, and z that could be usable, I can look them up myself and write the code myself. Using it to actually write the code is mostly garbage, yes. And in any case, you still need to have an idea of what you’re doing yourself anyway.
No, I’m not asking it to write code, I’m asking it to interpret the error and point to the actual problem in the code. It just can’t…
You know those business books that combine flimsy pop psychology and self-help literature with personal development and business goals? Yeah, those books with 300 pages and only one good idea per 100 pages if you’re lucky. The rest is just fabricated stories, ideas copied from other books, and regurgitation of ideas from the previous chapters to fluff up the page count. Yes, that category!
Well guess what? GPT can generate precisely that level of quality without any effort. In fact, it seems to gravitate towards that style unless you specifically work hard to steer it to aim higher. It has never been easier to become a business book author! Zero editing required. Just prompt and publish.
It feels like this is the one area where GPT truly excels.
Solo roleplay. You can make a character and interact. Generate fake conversations etc.
With generative images you can create custom backgrounds, portraits, and landscapes instead of having to look them up or make them yourself.
You can also do some interactive storytelling that’s kind of fun.
Generating quick test questions on a certain topic is another use case I’ve seen it be quite good at.
Wow, what a cool idea, I never even considered this. Any other suggestions to this idea to add some fun?
If you want it to go unhinged, try an uncensored LLM. Dans PersonalityEngine by bartowski is my current favorite.
Try aidungeon - it does exactly this.
I had no idea this existed, my mind is blown. Looking it up later today.
Yeah, I think dialogue for video game characters would be great, so they don’t all just repeat the exact same thing again and again.
Works in theory for written dialogue anyway. Spoken would be a bit ropey.
There is a Skyrim mod that does this I believe and it’s pretty decent.
Well colour me impressed!
I self-host DeepSeek R1 and it’s been pretty helpful with simple Linux troubleshooting, generating bash commands, and even programming troubleshooting. The thinking feature is pretty cool and I do find myself learning stuff from it.
What took it from a gimmick to an actual nice-to-have for me was when my jerry-rigged home network broke and wouldn’t connect to the internet. Having what is essentially an interactive StackOverflow/ServerFault running on a local machine was really helpful.
Running the model locally makes it easier to not overly rely on AI because of the limited token rate.
You self host the full Deepseek R1? What’s your hardware?
Also, you might enjoy !localllama@sh.itjust.works
No, I host the 70B version because I’m limited by my RAM.
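For anyone who wants to try something similar from a script, here’s a minimal sketch of querying a locally hosted model. It assumes a runner like Ollama or llama.cpp’s server exposing an OpenAI-compatible endpoint on localhost; the port, model tag, and prompt are placeholders, not the setup described above:

```python
# Minimal sketch: query a locally hosted model over an OpenAI-compatible API.
# Assumes a local runner (e.g. Ollama, llama.cpp server) is listening;
# the port, model tag, and prompt are examples only.
import requests

resp = requests.post(
    "http://localhost:11434/v1/chat/completions",  # Ollama's default port; adjust for your runner
    json={
        "model": "deepseek-r1:70b",                # example tag for a 70B build
        "messages": [
            {"role": "user",
             "content": "My home network has no internet access but LAN pings work. "
                        "What should I check first?"}
        ],
    },
    timeout=600,  # local 70B models can take a while to respond
)
print(resp.json()["choices"][0]["message"]["content"])
```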
Creating low-effort images for ideas that don’t warrant effort, like silly jokes.
I see it as a toy. No different from the Slinky or Silly Putty I had as a kid. Just something to play with.
Regarding job applications, most companies and sites are using shitty AI to rummage through the piles of resumes they receive.
The whole job application process is frankly one of the worst real-world uses of most technologies, not only AI.
I find it good for music and film suggestions. You feed it a set of “I want suggestions like these” examples and it provides a good result.
Also good at building Mermaid code for diagrams: just tell it “write me Mermaid code for this”, drop in a descriptive paragraph, then copy-paste the code into mermaid.live.
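To give a flavour of what comes back, a prompt describing this very workflow might produce something along these lines (made-up example, not actual model output):

```mermaid
flowchart LR
    A["Descriptive paragraph"] --> B["LLM writes Mermaid code"]
    B --> C["Paste into mermaid.live to render"]
```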
That use case became useful enough that there’s now a paid Mermaid page to automate that manual process.
I’d love to have an AI assistant that does shit like calling service providers, waiting in the queue, and taking care of business for me.
You can’t steal data, only illegally copy it. The original data holder still has the data; it’s just that you do too.
Inspiration for writing emails, letters, text messages. I always check what the thing wrote though.
There are only a few use cases where I’ve found I prefer it to doing things the hard way.
- As a thesaurus, since it’s great for going “what’s that one word that sort of means all encompassing, commonly used in reference to research/studies?” and it’ll end up giving me “holistic.”
- As part of other software, such as how Linkwarden automatically tags bookmarks by category when I add them.
- Double checking the answers I’ve come up with in regard to hyper-specific questions (usually about how a given piece of software can/can’t be used, or how it’ll interact with something else) just to make sure I’m not blatantly missing anything.
However, I try to avoid using it for anything that otherwise requires productive mental effort, because I find I end up a lot more informed and capable if I spend five minutes going through sites, learning about a topic, identifying wrong answers, and putting together better queries in the first place than if I ask a chatbot, even if it pulls from those same sources.
When you have a chatbot summarize or combine/condense information, you’ll always lose nuance and additional context, and very frequently that context will actually be helpful to your overall understanding. There’s also many cases where, for example, someone on a forum explains an issue a bit, and their profile has more related information on it that an LLM simply wouldn’t go for, only summarizing from their one response on that page. This can lead me down a rabbit hole that then leads me to finding other good sources. Maybe someone mentions that a particular site is helpful for what I’m looking for, and that then becomes something I use more frequently when I do searches for things, whereas an LLM would have just ignored that comment.
I don’t use it for writing code, because that’s what I love, but I do use it for documentation and other stuff I hate… 😂