

There’s still plenty of “this version of pytorch doesn’t run reliably with Python 3.12, please use 3.10”, though. It’s not all sunshine and roses.


See, I don’t have to worry about such details. I work in corporate software dev, which means that everything is an MSSQL database where most of the tables contain only an ID of a table-specific format and a JSON blob. Why use an ORM when you can badly reimplement NoSQL in a relational database instead?
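For illustration, the ID-plus-blob pattern looks something like this — a sketch with a hypothetical schema, using sqlite3 in place of MSSQL:

```python
import json
import sqlite3

# Every table is just an ID in a table-specific format plus a JSON blob.
# (Hypothetical schema; the real thing would be MSSQL with an
# NVARCHAR(MAX) column, but sqlite3 shows the shape well enough.)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, payload TEXT)")
con.execute(
    "INSERT INTO orders VALUES (?, ?)",
    ("ORD-2024-0001", json.dumps({"customer": "ACME", "total": 99.95})),
)

# Any real query now has to crack the blob open in application code (or
# lean on vendor-specific JSON functions), giving up most of what the
# relational model offers: typed columns, indexes, constraints, joins.
row = con.execute(
    "SELECT payload FROM orders WHERE id = ?", ("ORD-2024-0001",)
).fetchone()
order = json.loads(row[0])
print(order["customer"])  # → ACME
```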


Not really; mine was eventually too expensive and I only got that model because a) I could get it for cheaper through a leasing arrangement and b) I don’t need to pay for a car.
I must admit, though, that having a belt drive is extremely nice and worth the money. 10/10, top tier bike component.


I read that there are two “waves” of rapid biomolecular aging in the mid-40s and early 60s. Still affects everyone differently and of course a worn-out body will feel that much worse.
In general, though, our bodies start wearing out in our mid-teens, about a decade before we’re even fully grown! High-frequency hearing is one of the first things to suffer. Bodily decline is really a constant companion in our lives; it only becomes noticeable when it starts accelerating.


And don’t feel bad for getting an e-bike. Riding that is still a good workout if you get into the habit of going fast. E-bikes usually have a hard speed cutoff (25 km/h by law where I live); if you want to go faster it’s all you and the motor is just there to give you better acceleration and take the pain out of things like hills or opposing wind.
If you don’t want to go fast, the bike still expects you to put in a certain amount of work. Low-intensity training is still training. Most crucially, getting that bit of assistance might get you to use the bike when you otherwise wouldn’t, turning no exercise into some exercise.
People underestimate the benefits of light exercise. Even brisk walks or relatively leisurely motor-assisted bike rides can absolutely be beneficial if done regularly.


There’s another reason for soldering RAM, that being that the signal integrity of DIMMs is too low for use with LPDDR. There’s a reason why Dell spent R&D money on developing CAMM (and why JEDEC was very happy to refine it into the CAMM2 modules that are entering the market now).
Swappable RAM has its own advantages, such as being able to easily switch suppliers if necessary. I can see modular RAM making a comeback in laptops by means of LPDDR CAMMs.


Our brain generally relies on the first system way more than the second, to the point where what we think of as logical decisions are often actually intuitive ones that we then rationalize after the fact using system 2.
This is basically a power saving trick: Rational thinking uses way more energy than intuition.


GUI disk space analyzers are absolutely amazing.
For those who prefer KDE and/or donut graphs, Filelight has you covered.


Basically, the way DIMMs are built with edge connectors causes trouble with signal integrity. That limits the maximum speed and is one reason why LPDDR is soldered.
CAMMs replace the edge connector with a grid of contacts like on modern CPUs. That’s easier on the signal, allowing for faster speeds and even LPDDR on modules. Downside: They need to lie flat on the mainboard and thus use more space. Then again, laptop RAM is typically mounted in a flat configuration already so it’s mostly a new challenge on the desktop.
In case you’re wondering why they announced CAMM2 and where CAMM1 went: The original CAMM was a proprietary module from Dell; their spec was refined into the JEDEC-standardized CAMM2.


Sweet. I’ve been waiting for CAMM to gain momentum for a while now. Now all that needs to happen is for it to actually be used by OEMs.


It’s true that LLMs (and GANs) are taking over a term that contains a lot of other stuff, from fuzzy logic to a fair chunk of computational linguistics.
If you look at what AI does, however, it’s mostly classification. Whether it’s fitting imprecise measurements into categories or analyzing a waveform to figure out which word it represents regardless of diction and dialect, a lot of AI is just an attempt at classifying hard-to-classify stuff.
And then someone figured out how to hook that up to a Markov chain generator (LLMs) or run it repeatedly to “recognize” an image in pure noise (GANs). And those are cool little tricks, but not really ones that solve a problem that needed solving. Okay, I’ll grant that GANs make a few things in image retouching more convenient, but they’re also subject to a distressingly large number of failure modes and consume a monstrous amount of resources.
Plus the whole thing where they’re destroying the concept of photographic and videographic evidence. I dislike that as well.
I really like AI when used for what it’s good at: Taking messy input data and classifying it. We’re getting some really cool things done that way and some even justify the resources we’re spending. But I do agree with you that the vast majority of funding and resources gets spent on the next glorified chatbot in the vague hope that this one will actually generate some kind of profit. (I don’t think that any of the companies who are invested in AI still actually believe their products will generate a real benefit for the end user.)


Blockchain is an adequate solution to a problem that already has other, cheaper solutions.
AI is an adequate solution to a problem that has no other similarly adequate solutions (classification of complex information). Unfortunately, all the money is in that solution being applied to problems where it’s not adequate (content generation, user interaction).


Downside: Many companies use open-plan offices, which means it’s too busy to concentrate. So everyone wears noise-cancelling headphones in order to be able to work at all.
The only time I actually felt that being present was a benefit was in a company that had one room for every two people.


I think major factors in people bitching about the Windows 10 EOL are that a) Windows 10 was explicitly marketed as the final version of Windows and b) Windows 11 is so unappealing that even companies are reluctant to upgrade.
Normally, that wouldn’t be a big problem. We’ve had dud releases before. Windows Vista had few friends due to compatibility issues but was workable, and 7 launched well before Vista’s EOL. Likewise, Windows 8’s absurd UI choices made it deeply unpopular, but it was quickly followed by 8.1, which fixed that, and Windows 10 again arrived before 8’s EOL (let alone 8.1’s).
Windows 11, however, combines a hard-to-justify spec hike with a complete absence of appealing new features. The notable new features that are there raise concerns about data safety. In certain industries (e.g. medical, legal, and finance), Recall/Copilot Vision is seen as dangerous: it might access protected information, and it isn’t under the same control that the company has over its document stores. That widens the attack surface for a data breach that could lead to severe legal and reputational penalties.
Microsoft has failed to satisfactorily address these concerns. And there’s not even hope of a new version of Windows releasing a few months after 10’s EOL; Windows 12 hasn’t even been announced yet.
It’s no wonder that companies are now complaining about Windows 10’s support window being too short.


Yeah, and in the 70s they estimated they’d need about twice that to make significant progress in a reasonable timeframe. Fusion research is underfunded – especially when you look at how the USA dumps money into places like the NIF, which researches inertial confinement fusion.
Inertial confinement fusion is great for developing better thermonuclear weapons but an unlikely candidate for practical power generation. So from that one billion bucks a year, a significant amount is pissed away on weapons research instead of power generation candidates like tokamaks and stellarators.
I’m glad that China is funding fusion research, especially since they’re in a consortium with many Western nations. When they make progress, so do we (and vice versa).


At least the fusion guys are making actual progress and can point to being wildly underfunded – and they predicted this pace of development with respect to funding back in the late 70s.
Meanwhile, the AI guys have all the funding in the world, keep telling us how everything will change in the next few months, actually trigger layoffs with that rhetoric, and deliver very little.


A good foam pillow under my head, a bit of my blanket between my knees. Sometimes I think about getting one of those knee pillows but so far I haven’t bothered.
I won’t go back to a down-filled pillow. Those will inevitably stop supporting my head during the night.


I wouldn’t say pure rage… They were certainly high energy but not super focused on being angry. This may in part be due to Fred Durst adding major frat boy vibes.
I have no idea what they’re like these days.


I fully agree. LLMs create situations that our laws aren’t prepared for and we can’t reasonably get them into a compliant state on account of how the technology works. We can’t guarantee that an LLM won’t lose coherence to the point of ignoring its rules as the context grows longer. The technology inherently can’t make that kind of guarantee.
We can try to add patches like a rules-based system that scans chats and flags them for manual review if certain terms show up, but whether those patches suffice remains to be seen.
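Such a patch could be as simple as a keyword scan over the conversation log. A minimal sketch — the term list and function name are hypothetical; a real deployment would use curated watchlists and far more context-aware matching:

```python
# Hypothetical watchlist; terms chosen purely for illustration.
FLAG_TERMS = ("password", "credit card", "wire transfer")


def flag_for_review(chat_log: str) -> list[str]:
    """Return the watchlist terms found in a chat log, case-insensitively.

    A non-empty result means the conversation gets queued for manual review.
    """
    text = chat_log.lower()
    return [term for term in FLAG_TERMS if term in text]
```

The point of keeping this outside the LLM is exactly the guarantee problem above: a deterministic scanner can’t lose coherence as the context grows, so it fires regardless of what state the model is in.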
Of course most of the tech industry will instead clamor for an exception because “AI” (read: LLMs and image generation) is far too important to let petty rules hold back progress. Why, if we try to enforce those rules, China will inevitably develop Star Trek-level technology within five years and life as we know it will be doomed. Doomed I say! Or something.
Except if they then have to run it on their machine and the setup instructions start with setting up a venv. I find that a lot of Python software in the ML realm makes no effort to isolate the end user from the complexities of the platform. At best you get a setup script that may or may not create a working venv without manual intervention (usually it doesn’t). It might be more of a Torch issue than a Python one, but it still means spending a lot of time messing with the Python environment to get things running.
This may color my perception but the parts of the Python ecosystem I get exposed to as an end user these days feel very hacky. (Not all of it is, though; I remember from my Gentoo days that Portage was rock solid.)
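For what it’s worth, hiding the venv dance from the user isn’t hard in principle. A minimal sketch of a self-bootstrapping entry point, assuming a `requirements.txt` and a POSIX `bin/` layout (Windows would use `Scripts\python.exe`):

```python
import os
import subprocess
import sys
import venv


def in_venv() -> bool:
    """True when the interpreter is already running inside a virtualenv."""
    return sys.prefix != sys.base_prefix


def bootstrap(venv_dir: str = ".venv") -> None:
    """Create a venv, install pinned deps, and re-exec inside it.

    Hypothetical sketch: "requirements.txt" and the POSIX bin/ layout
    are assumptions, and real tools need error reporting on pip failures.
    """
    if in_venv():
        return  # already isolated; dependencies assumed installed
    if not os.path.isdir(venv_dir):
        venv.create(venv_dir, with_pip=True)
    python = os.path.join(venv_dir, "bin", "python")
    subprocess.check_call(
        [python, "-m", "pip", "install", "-r", "requirements.txt"]
    )
    # Replace the current process with one running inside the venv.
    os.execv(python, [python] + sys.argv)
```

Whether something this simple survives contact with Torch’s CUDA wheel matrix is another question, but it would at least spare the user the manual steps.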