The problem is that some small but non-zero fraction of these bugs may be exploitable security flaws in the software, and these bug reports are on the open internet. So if the maintainers just ignore them all, they risk overlooking a genuine vulnerability that a bad actor can then more easily find and use. Then the FOSS project gets the blame, because the bug report was there and they should have fixed it!
TL;DR: While governments are putting out assurances that AI won't make the final decision to launch nuclear weapons, they are tight-lipped about whether they are putting AI into the information-gathering and processing components that advise the world leaders making that decision. From a risk-assessment standpoint, there's little difference between a wrong AI making the launch decision and a human informed by a wrong AI making the launch decision.