For the first time, researchers have succeeded in translating silent thoughts in real time using a brain implant coupled with artificial intelligence. This technology promises to offer a new form of…
Making this a privacy issue is about a decade or two too early.
These devices require a surgical implant and extensive training. There’s no “thoughtcrime” potential here. This is for giving paraplegic people, victims of neurological disease, and potentially non-verbal autistic people the ability to speak.
The need to use a “public thought password” is because the AI training and the system aren’t yet good enough to recognize the difference between internal monologue and thoughts directed at others. Likely because the people involved in the research have been rendered mute for so long that they don’t know the difference anymore either. (There’s a toy sketch below of how such a password could work as a gate.)
Sure, it means both come from the same source and both can be translated. But in some SciFi future world where the cops plonk a thought helmet on you, just do what I do all the time anyway, and think “boobs boobs boobs…” over and over.
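For anyone curious how a “public thought password” could act as a consent gate, here’s a minimal sketch. This is not the actual research system: the passphrase, the function names, and the decoder’s word-stream interface are all invented for illustration. The point is just that nothing gets relayed until the chosen phrase shows up in the decoded stream.

```python
# Toy sketch of a "public thought password" as a consent gate. Not the
# actual research code; everything here is invented for illustration.

UNLOCK_PHRASE = "open sesame"  # hypothetical phrase the user picks

def gated_transcript(decoded_words):
    """Relay decoded words only after the unlock phrase is thought.

    decoded_words: a stream of word guesses from the neural decoder,
    which cannot tell private monologue from speech meant for others.
    """
    phrase = UNLOCK_PHRASE.split()
    recent = []
    unlocked = False
    for word in decoded_words:
        if unlocked:
            yield word
            continue
        recent.append(word)
        # Compare the last few decoded words against the passphrase.
        if recent[-len(phrase):] == phrase:
            unlocked = True  # everything after this point is relayed

# Everything thought before the passphrase stays private.
stream = "ugh i hate this place open sesame water please".split()
print(" ".join(gated_transcript(stream)))  # -> "water please"
```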
Good on you for looking at the upside, I suppose. In my view, the framing of this technology as a “medical miracle” is a very convenient smokescreen.
As if any research is funded to cure anything. Cures are not profitable. Just look at the pharma industry.
It genuinely is a medical miracle, though.
The predictive nature of monitoring your internet and money use is less invasive, easier, cheaper, and overall good enough to build a law enforcement paradigm around for surveilling hundreds of millions of people. Thoughtcrime can be predicted from your searches. Why add extra steps?
It’s in the title, to read minds.
You’re missing the entire point. Directly reading thoughts is complicated, delicate, and expensive. This is high-end stuff.
Meanwhile, in reality…
Google already knows what you are thinking by tracking everything you do online. Well enough that the debate among advertisers as far back as 2010 was how not to “spook the customer.” A large amount of the “my phone is listening to me!” phenomenon is just the predictive nature of advertising that tracks everything you do.
There’s 15 years of success in this already. Then - THEN - 6 months ago Google started using browser fingerprinting to track everything, everything, everything you do, everywhere you go, everyone you talk to online. EV.AH.RE.th.ha.iiiiiing. Why spend money to get invasive and weird about it when you’ve got the data you need in high fidelity? Why do anything physical when you get what you need from people’s phones?
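To show how cheap this kind of tracking is, here’s a toy sketch of browser fingerprinting. The attribute values are invented, and real fingerprinters combine dozens more signals (canvas rendering, audio stack, font metrics), but the mechanism really is this simple: hash together things every browser volunteers anyway.

```python
# Toy browser fingerprint: hash together attributes the browser hands
# over anyway. Values here are invented; real fingerprinters use many
# more signals.
import hashlib

attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "2560x1440x24",
    "timezone": "America/Chicago",
    "language": "en-US",
    "fonts": "Arial,Calibri,Comic Sans MS,Consolas,...",
    "gpu": "ANGLE (NVIDIA GeForce RTX 3060)",
}

# Deterministic serialization, then hash: same setup, same ID,
# on every site, with no cookie to clear.
blob = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(blob.encode()).hexdigest()[:16]
print(fingerprint)
```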
Same answer as before, to read minds. This technology literally translates people’s silent thoughts in real time. Does anything you’ve mentioned here do that? Sorry but no, not even close.
Do you seriously just not understand? Or do you simply disagree, can’t fathom how I could disagree with you, and can’t process a difference of opinion? I’ve been clear and consistent. I understand your point and don’t fully agree.
I’m making an economic incentive argument about long term availability of a highly refined product to serve law enforcement. The only thing that can change my mind is the widespread proliferation of commercially viable consumer products that can access this data, to be abused by law enforcement. Which may happen, who knows.
But just like the 23andMe privacy panic, it really hasn’t borne out, because commercial viability was limited to novelty. There were brain-training toys 15 years ago that attracted the same conspiracy theory. That fizzled. In 2000-2005 people thought we would all have RFID chips implanted in us to buy things at the store, not realizing that the existing prevalence of credit cards and the soon-to-come prevalence of smartphones meant no one was right about that trend at all.
So let’s see how the next 20 years go and meet back here in 2045 to discuss. OK?
This technology is capable of translating a person’s silent thoughts in real time. I’m sorry but nothing you’ve discussed comes even close to that.
It’s a simple cost-benefit comparison.
Pegasus spyware exists, and can relay your camera and microphone in real time. That doesn’t mean it gets used every time an LE agency wants into a phone or has a suspect in mind. It’s expensive, risky, and invasive. In fact, it’s rare and touchy, and most often abused by corrupt and despotic governments who mess it up by using it on journalists who catch it.
Often phone location, unencrypted SMS messages, and search history are data enough for a real criminal prosecution in countries that have and follow laws. So YMMV on that part.
I think the issue comes when the state needs information and is willing to surgically implant things in you and force you through a series of tests to establish a baseline.
That’s expensive and takes work.
Not sure if you’ve seen a government lately, but “this takes work” isn’t how it works. What’s the blunt, immediate, let-the-lawyers-sort-it-out version? That’s what will be done.
Let’s consider some post-9/11, Abu Ghraib-style scenario where captors try this instead of breaking someone with 24-hour music and waterboarding. This gets all your thoughts, right? OK, well, all my thoughts at that time would be “F you, F you, you’re all liars, I don’t believe anything you say, F you, go jump in the ocean and get eaten by sharks, F you, F you…” You have to think the thing they want you to think for this to work.

Anyone anticipating that this might happen to them if detained should know how to control their thoughts. Sing the same 3 songs over and over in your head. Repeat lines from your favorite movies. Think in another language. Name things you can see in the room and think through the construction methods of the building.

It will very, very quickly prove useless against anyone with even moderate willpower. A genuine sociopath would excel at using it to manipulate their jailers. If anything, the tables have turned and they’re forced to listen to everything you think - now you’re torturing them!
Not sure how you can believe, on one hand, that there isn’t potential for “thoughtcrime” while also recognizing that the technology has a hard time differentiating between internal monologue and thoughts directed at others.
At best it just can’t recognize consent; at worst there will be a point where people can filter for internal thoughts. The more fine-tuned this gets, the easier it will become to abuse.
Granted, I agree with you that nefarious use doesn’t seem imminent, and that the technology could have huge upsides if used in good faith.
I’m trying to scream from the rooftops that the horse is already out of the barn. We’re PAST this point already with existing technology.
Let’s say that it’s illegal to own a tiger where you live. You spend a month searching online for “how to feed a tiger,” “what does a tiger eat,” and “tiger bedding,” and shopping for enclosures. Maybe you even buy tiger enclosure panels and bedding and a blinged-out tiger-sized collar. But you never actually buy a tiger. Especially in the most sophisticated surveillance states, that gets close enough that you might get arrested. A crime prediction algorithm would rank you as high risk for committing a crime, and, if local legislation permits it, you might be charged with a crime you haven’t actually committed: “conspiracy to endanger wildlife,” even though the conspiracy is between you and Google. (There’s a toy sketch of that kind of scoring after this comment.)
However, you’re telling me that I need to be worried about the far-flung future edge case where, after going through a surgical procedure, I spend 5 months training an AI tailored to my brain in particular, and then at that point I think “You know, I really want to get a tiger” and get arrested. You’re going to die on the hill that it’s possible. Sure, maybe one day it will be less invasive and the training on your own brain will take hours instead of months. But right now we already have everything that’s needed to do this to a reasonable enough degree, and it’s shocking to me how complacent people are.
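To make the tiger example concrete, here’s a toy sketch of the kind of risk scoring described above. The phrases, weights, and threshold are all invented; a real system would be proprietary and far murkier, but the shape is the same: score the searches, flag the person, no tiger required.

```python
# Toy "precrime" risk scoring over search history. Phrases, weights,
# and threshold are all invented for illustration.

RISK_WEIGHTS = {
    "how to feed a tiger": 2.0,
    "tiger bedding": 1.5,
    "tiger enclosure panels": 3.0,
    "buy a tiger": 5.0,
}
FLAG_THRESHOLD = 5.0

def risk_score(search_history):
    """Add up weights for every watched phrase found in the history."""
    score = 0.0
    for query in search_history:
        for phrase, weight in RISK_WEIGHTS.items():
            if phrase in query.lower():
                score += weight
    return score

history = [
    "how to feed a tiger cub",
    "tiger bedding for large cats",
    "tiger enclosure panels 12ft",
]
score = risk_score(history)
print(score, "flagged" if score >= FLAG_THRESHOLD else "not flagged")
# 6.5 flagged -- and no tiger was ever bought
```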
You can be concerned about a possible future issue and a current issue at the same time. To dismiss a possible issue because you are already concerned with something else seems silly. Maybe I’m misunderstanding what you’re getting at.
We’re agreeing to disagree, and that’s OK. I don’t think this will ever get to the point of the sci-fi plot device you think it will be. It’s too invasive and requires too much training to be effective.
So it needs a lot of development and patience and literal brain surgery on willing subjects to test this specific use. Medical uses will follow their own niche, which means clandestine research will have to fork it for its own purposes soon. The overhead is massive here, pardon the pun.
Meanwhile, an effective method for getting close enough already exists and is used to prosecute people.
Where is the incentive to spend years of research developing this specialized tool further, tailored for the “precrime” purpose? Especially when doing it the existing way is not just quick and easy but cheap? There’s no urgency or need to do anything else. This doesn’t do anything we can’t already do.
Edit: You’re suggesting something as invasive as the famously unsuccessful MKUltra program. Meanwhile, money and blackmail remain widely used ways to manipulate someone. The bar is already set high by low-tech means.
Ooooh, did you listen to the EFF podcast and that’s why you’re all about this?