KB5077181 was released about a month ago as part of the February Patch Tuesday rollout. When the update first arrived, users reported a wide range of problems, including boot loops, login errors, and installation issues.
Microsoft has now acknowledged another problem linked to the same update. Some affected users see the message “C:\ is not accessible – Access denied” when trying to open the system drive.
There must be something really seriously wrong at Microsoft. I can understand that Windows patches are complex and that they might break some of those crazy things people are running on their machines. But how is a bug that is killing access to the C:\ drive able to get through testing? WTF are they doing?
It’s going to come out that there’s AI in the code. And the code testing was done by AI, who gave the buggy code the green light.
Or worse: AI is doing the QA as well
What QA? Microsoft’s QA was always the CEO demoing the latest repository head on stage.
They at least used to be embarrassed by a live BSOD.
We’re doing the QA.
“Code Testing” = QA
It passed the unit test, it must be good!
Quality Artificial Intelligence assurance
They don’t need testing because they tell the AI not to make any errors
And then the LLM says something like “You’re absolutely right, there was an error in that code that is clear and obvious now that it has been pointed out, despite the fact you gave the instruction to make no errors. Is there anything else I can help with?”
… and they’ll be too blind to take that as the warning it is and continue to ask even more of the LLM.
My boss loves AI and he uses it for everything. He made some stats graphs and summaries, and he was bragging about how he got AI to make them errorless: he tells it to check for errors and makes it swear it’s accurate… while we were looking at a graph where the y-axis numbers were all fucked up
Interestingly, AI is actually pretty good at making graphs, the trick is you don’t ask it to actually make the graph itself. Instead you have to ask it to write a python script to create a graph using matplotlib from whatever source file contains the data, then run that script. Same with math. Don’t ask it to do math directly, instead ask it to write a bash or python script to do some math, then run that. Still not perfect, but your success rate increases by about 1000%
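For what it’s worth, the “math via script” half of this trick is easy to illustrate. This is a sketch of the kind of script a model might emit when asked to summarize a data column; the values are made-up stand-ins for a parsed CSV column:

```python
# Sketch of the kind of script an LLM might emit for "summarize this
# column" -- the values here are made-up stand-ins for real CSV data.
import statistics

values = [12.0, 15.5, 9.25, 30.0, 22.75]

mean = statistics.mean(values)
stdev = statistics.stdev(values)  # sample standard deviation
print(f"mean={mean:.2f} stdev={stdev:.2f}")  # prints mean=17.90 stdev=8.45
```

The point being: once you run the script, the arithmetic is done by the interpreter, not by token prediction, which is where the reliability gain comes from.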
Because of so much open source and stack overflow it was trained on.
But who writes bash scripts to do math?
A full script? Nobody. But you can just run it interactively on the command line, which a lot of AI clients have access to. bc works great for basic math in the shell.
That’s about 90% of what I use AI for right there: silly little bash and python scripts. A graph, some image compression, ffmpeg video shenanigans, the works.
It’s really really bad at doing spreadsheet analysis. Even basic shit that I would give to an intern. At least an intern will generally just make shit up and pretend it’s not wrong even when I point it out, and if they do I get a new intern.
It’s Microslop. This is what’s wrong. Also, that they fired too much of the testing staff in favor of (user-)testing rings.
It’s not as bad as that time they permanently deleted user documents and photos.
See they had this trick where if you didn’t have enough space on your drive to unpack an update, they’d just move your shit to OneDrive temporarily, then move it back when the update was done. Only they forgot to move it back, and lost it. Oops.
Seriously?!?! 😲
No, but are we gonna wait until they do?
My company is starting to roll out having AI both put up PRs AND give code reviews.
I would not be surprised to hear Microslop is doing the same thing and having horrible results.
Amazing what happens when you try to turn your talent pool into lifeless casino monitors.
No one smart is going into Windows dev in 2026. It’s like working on IBM mainframes. The only people left working on it are middle-of-the-road new grads they hire and boomers who are retiring.
Vibecoding. Microslop has peddled AI so much that they have gotten addicted to their own supply.
Probably AI code getting tested by AI.
*Microslop
Let’s start calling it what it is

Bonjour
That’s more of an apple thing
Saluton
Let’s not pretend that Linux is without bugs.
Let’s not pretend that Microslop is capable of producing good software.
I don’t know about that, XP, 2000 and 7 were pretty solid.
That was Microsoft. Microslop lacks that ability.
Fair point!
Not gonna mention Windows 8? Hmm I wonder why…
Or Vista lol, or Windows 98, which was so bad they essentially recalled it and re-released it as a second version?
Everyone forgets MILLENNIUM!
Because of the trauma.
I used 98 as a teen, it came pre-installed. What was wrong with it compared to 95? Asking out of curiosity.
Man, it’s been a long time, but essentially 98 was the first one to allow plug and play without hunting down drivers, if I’m remembering right. That and a few other stability issues made the original crash constantly, including during a demo at a tech show. They re-released it as a second edition that fixed most of it. If you bought the computer closer to 1999, they had fixed it.
Because it was shit.
I never claimed that everything MS did was good
XP was probably their most solid OS. And that shouldn’t be a brag.
Reputation is such a strange phenomenon. XP was considered a disaster at launch. It took them years to repair everything that didn’t work.
The rollout of 64-bit architecture support was so sloppy that people were holding on to old hardware so as to not have to install the x64 version of XP. The debut of the NT kernel on the consumer line meant that nothing had drivers and most software wasn’t compatible yet. DirectX 9 broke compatibility with half of old games. There were also two entirely different versions of the shell with dramatically different start menus. Some versions didn’t support multi-core CPUs.
Not to mention that XP actually spans three different OSs. Upgrades were just a reinstall wizard of the OS.
It wasn’t until the end of XP’s life and the launch of Vista that people started to cling to XP SP2, and its reputation switched due to a mix of nostalgia and fear of the much worse launch of Vista.
deleted by creator
I think .net is pretty good. I don’t use it, but people seem to love VS Code.
It’s a lot easier to accept bugs when you’re not paying for it, it’s not spying on you, it lets you do what you want, and it respects your freedom.
It is a hell of a lot less buggy
And the bugs that are there we are aware of. Microsoft may or may not fix severe security bugs, opting to hide the information instead because it’s better for their bottom line
Microsoft has always been a bug-riddled mess that people paid for, and then they needed to pay even more just to keep their shit working
Now with the AI slop apparently contributing 30% of the code, things have gone off a cliff
So no, nobody is pretending Linux is bug-less, it’s just that Microsoft is that bad
It doesn’t lock you out of your C: drive
Also remember that it’s called C drive because your A and B drives are still floppy drives in 2026
In 20 years I never had a system-breaking (or really even any noticeable) bug from an update.
Cool anecdote
I know, right?
The downvotes for this little nugget of truth suggest to me that linux fans are somewhat cultish.
Yeah, I made my comment as I am tired of fanboyism, I have daily driven Linux in the past, I was the Linux sysadmin at a major financial institution for years, Linux is awesome!
But please don’t get arrogant and claim it is faultless, with constructive criticism it can only get better.
Right now I am running Windows as my daily, and my work is only in Windows.
I dailied Linux back in the 2.6 days, I remember a classmate having to manually edit the kernel source code to get his USB mobile broadband modem to work. I had a modem from another brand, so I only had to run usb_modeswitch to get mine working.
I set up Fluxbox from scratch to get a fantastic UI experience on my laptop.
I know Linux.
I switched back to Windows for gaming, and now with W11 and gaming support for Linux, I am looking to move back to Linux.
I am no Windows nor Linux fanboy.
It’s not so much Linux fanboyism as it is Windows (whatever the total polar opposite of fanboyism is)
The only good argument for Windows is specific software compatibility. If there were equivalent solutions on both for everything, it is an absolute truth that Windows is worse.
That is not an opinion, outside of intentionally wanting to be commercially oppressed.
Also games access to your kernel just screams to me “I wanna have fun and don’t care about security at all, now gimme my fortnite vbux mom” in the most middle-school voice possible.
Wow, how quickly people forget…
Back in 2011, with kernel 2.6.x, gaming on Linux was nothing like it is today; it required dedication, skills and time.
And at the time I didn’t have the energy to deal with it.
Not sure how that’s related to what I said
Hello there
CeilingPenguin is watching you masturbate
I like how, once AI is invented, there is never a problem that isn’t AI related.
Microsoft made broken shit before AI, it isn’t like they suddenly lost that capability once AI was invented.
It’s more like the old adage but extended: “To err is human, to really foul things up you need a computer, but to make an unbelievable mess you need an AI.”
That is certainly true and may very well be the case here.
It could also be the case that a human developer forgot to bounds check an array and iterated out of bounds, corrupting some important kernel variable. We won’t know unless we get a postmortem.
AI enables them to automate the generation of shitty code for broken systems even more efficiently
*Microslop
Let’s start calling it what it is
I use Linux exclusively, my family’s laptops are all Linux, I self-host, etc. I’m no Microsoft fanboy, so believe me when I tell you…
…that is a stupid name and anyone using it sound like a clown.
AI’s use in industry is destructive to knowledge workers, the massive dump of capital into the computer hardware markets has caused massive disruption in secondary markets, and the coming market crash will affect everyone in the world. There are plenty of easy arguments to be made against using AI.
Going into a comment section and posting “Well, acktually, you mean MicroSLOP!” does none of that. It’s performative, not substantive.
But there weren’t that many bugs.
That seems like an easy statement to prove. How many bugs were there before AI vs after?
I may be wrong, but I would guess that you haven’t seen any data to back up your statement and you’re basing it on your perception based on social media posts.
You see a lot of clickbait articles where the author highlights a specific patch note or vulnerability and tries to tie that to AI. They’re doing that to earn revenue because anti-AI posts get traffic… they’re not trying to objectively inform you about the rate of bugs in Microsoft’s products. Your perception is being skewed by selection bias.
I would guess that you haven’t seen any data to back up your statement and you’re basing it on your perception based on social media posts.
Well, that’s certainly what you’re doing at least.
You think I’m basing my perception based on a social media post? That’s very observant.
You’re right.
I am responding to a social media post and so my perception of that social media post is based on a social media post (specifically the one that I’m responding to).
The difference between my comment and their comment is that they present their statement as a fact and I indicate uncertainty.
I don’t know the person, I may be wrong and they may have the statistics to back up their fact claim. Since I didn’t know for sure I wrote:
I may be wrong, but I would guess
This indicates that I am not confident in my answer but it is the current top hypothesis among many.
I assume (<- see, indicating uncertainty) that they don’t have this data and are simply making it up.
As far as WHY they are making it up
Considering that social media is the top news source for most people. (Since this is a fact claim, here is a source: https://www.niemanlab.org/2025/06/for-the-first-time-social-media-overtakes-tv-as-americans-top-news-source/). If you don’t know about a person you have to assume an average person. An average person is more likely to receive their news from social media.
I think it’s uncontroversial to say that AI is a divisive topic online, and so guessing that this person’s perceptions are built on misinformation about AI posted on social media seems to be a pretty rational conclusion based on the facts I have before me.
You sure love your weasel words.
I think maybe you don’t know what ‘weasel words’ mean.
From Wikipedia:
In rhetoric, a weasel word, or anonymous authority, is a word or phrase aimed at creating an impression that something specific and meaningful has been said, when in fact only a vague, ambiguous, or irrelevant claim has been communicated. The terms may be considered informal. Examples include the phrases “some people say”, “it is thought”, and “researchers believe”. Using weasel words may allow one to later deny (a.k.a., “weasel out of”) any specific meaning if the statement is challenged, because the statement was never specific in the first place.
There’s none of that here.
Summary review:
The passage does not contain significant weasel words. It acknowledges uncertainty explicitly with phrases like “I may be wrong,” “I would guess,” and “I assume,” which actually counteract weasel wording by qualifying claims. The author distinguishes between fact and opinion, admits lack of knowledge about the individual, and provides a source for a factual claim about social media as a news source. Overall, the language is transparent about uncertainty rather than using vague or evasive phrasing to appear more confident than warranted.
Dear lord. You’ve upgraded to weasel paragraphs.
Huh, my computer doesn’t seem to be affected.
I’m using Arch, btw.
I think I’m affected because I can’t access the C: Drive.
I’m using Debian, btw.
I think I’m affected because I can’t locate a c: drive.
I’m using Mint, BTW.
I managed to park over half the c:drive. I drive an X5, BMW.
I can’t spot my c: from this altitude, in my G6
greetings from the International Space Station
There’s your problem. You should be using Arch.
I use slackware, btw.
Seems like your PC isn’t affected because you don’t have a C drive? Try creating a C drive and see if there’s an issue.
mount c:\windows /root
Fedora’s better ;) ~I posted this in a jocular mood, don’t take it seriously~
Microsoft believes the issue may be related to the Samsung Share application, although the exact cause has not yet been confirmed.
30percentofcodewrittenbyai.jpeg
Who are we kidding that number is outdated at this point. Probably 40% now given the increase in ridiculous bugs.

They need to rapidly reduce the complexity of their software if they want to get this under control. The answer is NOT to add more features, it’s to simplify things.
They aren’t capable of doing that.
Source on that is me, I worked for MSFT during the rollout of Windows 8 and the 360 red ring nightmare.
They’re internally wayyyyy too culty and cliquey.
Everyone has to do things the MSFT way, and the MSFT way is team leads all leading their own thing and arguing about why it’s so cool and necessary.
The culture is diametrically opposed to simplifying things and reorienting around a fundamentally minimized, more stable core system.
Everything has to be able to plug into as many other things as possible, which creates insane nested dependency loops and chains that they fuck up all the time.
The Red Ring of Death was around 2009 or so, Windows 8 rolled out in 2012. That was 14 years ago now.
- 2012: ~94,000 employees.
- 2026: ~228,000 employees.
The median tenure of employment at Microsoft is 5.3 years, so most of those 94,000 employees will be long gone by now.
Also, Satya Nadella took over in 2014 and made major changes to the corporate culture from the earlier Balmer era. Stack ranking was abolished, there were major corporate acquisitions that brought external corporate cultures inside (eg LinkedIn, GitHub, ZeniMax/Bethesda, Activision Blizzard, Nokia).
I have no idea what Microsoft’s current internal culture is like but I think your impression is likely quite outdated by this point.
I mean, do those headcount numbers count contractors?
V dashes? A dashes? Etc?
The majority of MSFT’s workforce has been temp contractors for a very long time, and they do everything they can to have as few actual employees as possible.
If your source is saying the average tenure is 5.3 years, no, no it is not counting contractors.
Beyond that, unless you have an actual source for the culture shift beyond ‘you think so’, I’m going with no, everything I’ve described has gotten worse.
That’s why I’ve, for years, been able to predict moves that MSFT would likely make, moves that people at the time thought were ludicrous or incredibly pessimistic worst-case scenarios… and then they happen.
As an example, I was saying MSFT had probably set impossibly unrealistic profit targets for the Xbox/Gaming division, to more or less intentionally downsize it and then basically kill it off over time, while acting like that’s not what they were doing… I said that a good deal of time before the news broke, and that is exactly what happened.
They are a gormless faceless machine that is unimaginably high on their own supply.
Given the amount of outright caste-based racism I saw amongst Indian actual employees vs Indian contractors when I was there, where HR told me that actually ‘that’s just their culture’ and that I was being racist for pointing out abusive managers literally screaming at their lower-caste underlings, whom they had by their H1-B balls…
…yeah, I’m willing to bet it is now even worse.
I’ve also worked at other large corps, a place or two in fairly high-responsibility positions.
I’ve met a fair deal of the Seattle/Bellevue/Redmond upper management of various tech and other firms, and the thing they all have in common is an unimaginably inflated ego and elitist attitude that propagates downward via an essentially religious level of respect for people in higher positions… it’s just expected to be shown by anyone under them.
They really are like the corpos from Cyberpunk 2077, they just don’t have the nakedly open bloodlust most of those corpos do.
outright caste based racism I saw amongst Indian actual employees vs Indian contractors
I worked at the Comcast Center back when Comcast was building a new skyscraper two blocks away. The reason for this new building was that the CC had become jam-packed with InfoSys contractors, like literally five or six developers jammed into one-person offices, floor after floor after floor. The executives – a majority of whom were Indian-American – wanted to see a lot fewer Indians around their headquarters (and they talked about this openly) so they built an entire new building to house them.
I mean, do those headcount numbers count contractors?
I linked the source for my information, feel free to dig into it for more detail.
Beyond that, unless you have an actual source for the culture shift beyond ‘you think so’
I literally said I didn’t know whether there had been a culture shift. Reread the last line of my comment. All I’m doing here is pointing out that your own view on the subject is likely very outdated at this point. There’s been enormous employee turnover, including right to the top CEO position, and other major companies have merged into Microsoft in the interim.
I’m willing to bet it is now even worse.
Based on?
I’m not doubting your previous experience. I’m just pointing out that it was a long time ago and a lot has happened since then, so I’d like to hear some more recent evidence.
Why use their stock ticker instead of their name?
Force of habit, shorter to type, everyone knows what I mean.
EDIT:
It took me an embarrassingly long amount of time to realize that does not work with 3RR.
That was the internal code used in a fair number of processes for referring to ‘The Red Ring of Death’, the 3 red-lit segments on a 360 that meant basically a 95% chance it’s gotta be RMA’d, likely just wholly replaced.
Great idea, I’ll ask Copilot to do that
That was the state of windows in 2005
Never again, Windows.
Microslop.
We just had this month’s Patch Tuesday and they’re still dealing with problems caused by last month’s?! I really need to try harder to convince my father that putting Linux on his current computer is a better idea than buying a Windows 11 computer.
Install Linux. Problem Solved.
It’s hilarious that the issues people think Linux has, like for example the disk deleting itself, are exactly what happens on Windows lol.
It’s hilarious how far you had to jump to land on that conclusion.
There was a story going around back in September about a person whose wife used OneDrive on her phone. It had taken it upon itself to copy 25+GB of data from the phone into OneDrive, despite her only having the free account tier, and to copy it to their Windows 11 PC. There it completely filled up the small SSD boot drive, putting it into a condition of extremely low disk space, which in turn made it impossible for Windows to boot. Here it is.
I doubt that story. I’m a Linux user at home, but at work I admin Windows and Linux systems. I can see his logic because he’s thinking how I would. But Windows doesn’t behave like that. On Linux you can fill a drive and get issues booting, but Windows leaves space so that even when the user drive is full the system can still create the temp files needed for operation. Whatever he did trying to get around the default behavior, he misconfigured something.
I dunno? It sounds very plausible, exactly the kind of thing that Windows would do. I posted about it to Metafilter some time back and no one there seemed to think it couldn’t happen.
It sounds like user error to me. There are like 2 settings on OneDrive and they couldn’t even be bothered to configure it, yet he’s going through all this complicated troubleshooting.
If you can’t log into Windows you can’t change its OneDrive settings! What’s more, the user had no idea what was causing the problem, be it OneDrive or something else, until he did that troubleshooting! And, just setting up a new phone shouldn’t make your computer unbootable for any reason! Geez, way to victim-blame there.
He wasn’t able to log in because he broke something on the back end, not because of OneDrive.
Bunch of braindead vibe coders at fault, I bet
Sounds like they let AI touch it
Ugh… I’m so tired of “microslop” and “AI slop”.
I’m not defending Microsoft in any way, but they were releasing buggy updates long before the rise of AI.
You know what’s going on inside the large companies that are hoping to cash in on the AI thing? All workers are being pushed to use AI, and goals are set targeting x% of all code written to be AI-generated.
And AI agents are deceptively bad at what they do. They are like the djinn: they will grant the word of your request but not the spirit. Eg they love to write helper functions but won’t necessarily reuse them, instead writing a new copy each time one is needed.
Here’s a test that will show that, with all the fancy advancements they’ve made, they are still just advanced text predictors: pick a task and have an AI start that task and then develop it over several prompts, test and debug it (debug via LLM still). Now ask the LLM to analyse the code it just generated. It will have a lot of notes.
An entity using intelligence would use the same approach to write the code as it does to analyze it. Not so for an LLM, which is just predicting tokens with a giant context window. There is no thought pattern behind it, even when it predicts a “thinking process” before it acts. It just fits your prompt to the best match among all the public git repos it was trained on, from commit notes and diffs, bug reports and discussions, Stack Exchange exchanges, and the like, which I’d argue is all biased towards amateur and beginner programming rather than expert-level. Plus it now includes other AI-generated code.
So yeah, MS did introduce bugs in the past, even some pretty big ones (that was my original reason for holding back on updates, at least until the enshittification really kicked in). But now they are pushing what is pretty much a subtle bug generator on the whole company, so it’s going to get worse. Admitting it has fundamental problems would pop the AI bubble, so instead they keep trying to fix it with bandaids in the hopes that it’ll run out of problems before people decide to stop feeding it money (which still isn’t enough, but at least there is revenue).
You’re spot on regarding how AI operates.
AI is stupid story time!
I recently helped a friend with a self-hosted VPN problem. He had been using a free trial of Gemini Pro to try to fix it himself but gave up after THREE HOURS. It never tried to help him diagnose the issue, but instead kept coming up with elaborate fixes with names that suggested they were known issues, like The MTU Traffic Jam, The Packet Collision Quandary, and, my favorite, The Alpine Ridge Controller Trap. Then it would run him through an equally elaborate “fix”. When that didn’t work, it would use the failure conditions to propose a new, very serious sounding pile of bullshit and the process would repeat.
I fixed it in about fifteen minutes, most of that time spent undoing all the unnecessary static routing, port forwarding, and driver rollbacks it had him do. The solution? He had a typo in the port number in his peer config.
I can’t deny that LLMs are full of useful knowledge. I read through its output and all of its suggestions absolutely would have quickly and efficiently fixed their accompanying issue, even the thunderbolt/pcie bridging issue, if the real problem had been any of them. They’re just garbage at applying that information.
Yeah, they don’t do analysis but can fool people because they can regurgitate someone else’s analysis from their training data.
It could just be matching a pattern like “I have a network problem with <symptoms>. Your issue is <problem> and you need to <solution>.” The problem and solution are related to each other, but the problem isn’t related to the symptoms, because the correlation with “network problem” ends up being stronger than the correlation with the description of the symptoms.
And that specific problem could likely be solved just by adding a description of that process to the training data. But there will be endless different versions of it that won’t be fixed by that bandaid.
Now ask the LLM to analyse the code it just generated. It will have a lot of notes.
Not only will it have a lot of notes, every time you ask it to analyze the code it will find new notes. Real engineers are telling me this is a good code review tool, but it can’t even find the same issues reliably. I don’t understand how adding a bunch of non-deterministic tooling is supposed to make my code better.
Though on that note, I don’t think having an LLM review your code is useless, but if it’s code that you care about, read the response and think about it to see if you agree. Sometimes it has useful pointers, sometimes it is full of shit.
So when do I stop asking the LLM to take another look? If it finds a new issue on the second or third or fourth check am I supposed to just sit here and keep asking it to “pretty please take another look and don’t miss anything this time”?
I’m not saying it’s a useless tool, it’s just not a replacement for a human code review at all.
Stop when you feel like it, just like any other verification method. You don’t really prove that there are no problems in software development; it’s more of a “try to think of any problem you can and do your best to make sure it doesn’t have any of those problems” plus “just run it a lot and fix any problems that come up”.
An LLM is just another approach to finding potential problems. And it will eventually say everything looks good, though not because everything is good but because that happens in its training data and eventually that will become the best correlated tokens (assuming it doesn’t get stuck flipping between two or more sides of a debated issue).
That sounds worse than useless. It would be better to fail utterly than make up shit that you have to waste time parsing through.
It helps in the sense that once you’ve looked at code enough times, you can stop really seeing it. So many times I’ve debugged issues where I looked many times at an error that is obvious in hindsight, but I just couldn’t see it before that. And that’s in cases where I knew there was an issue somewhere in the code.
Or for optimization advice, if you have a good idea of how efficiency works, it’s usually not difficult to filter the ideas it gives you into “worthwhile”, “worth investigating”, “probably won’t help anything”, and “will make things worse”.
It’s like a brainstorming buddy. And just like with your own ideas, you need to evaluate them or at least remember to test to see if it actually does work better than what was there before.
They’ve earned that name at this point
I agree, but if microslop can be the downfall of microslop I will jump on the bang wagon. I think they should add more IA. Did they try live GenIA update of the user system yet ? Sound a money making idea.
Are you having a stroke?
Yes but any specific part in mind ?
Have someone call 911 for you.
It’s because they got rid of testing and quality control. They are only doing minimal testing now in controlled environments while the world is messy.