If I don’t understand what’s happening here but want to, should I research Unicode in general or something else?
Unicode is a way to encode the characters humans use for writing into numbers a computer can store.
ASCII, for example, is another such encoding, as is EBCDIC.
All of these methods translate squiggles we’ve used for centuries into something that can be represented inside a computer.
For example, under ASCII the letter “A” is represented by the number 65.
The post is pointing out that there are two characters which look identical but are assigned different numbers, so what the user sees is the same while what the computer sees is different.
This is the basis for much tomfoolery.
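To make that concrete, here is a minimal Python sketch. It assumes Latin “A” (U+0041) and Cyrillic “А” (U+0410) as the lookalike pair; the post itself may involve different characters, but the idea is the same.

```python
# Two characters that render identically but have different code points.
# Assumption: Latin "A" vs Cyrillic "А" as the lookalike pair.

latin_a = "A"          # LATIN CAPITAL LETTER A
cyrillic_a = "\u0410"  # CYRILLIC CAPITAL LETTER A

print(ord(latin_a))     # 65   -- same number ASCII assigns to "A"
print(ord(cyrillic_a))  # 1040 -- a different number entirely

# They look the same on screen, but the computer sees two different characters:
print(latin_a == cyrillic_a)  # False
```

Run that and the two strings print as the same glyph, yet the comparison is False, which is exactly the gap between what the user sees and what the computer sees.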