IMPORTANT NOTES (PLEASE READ!):
- These are NOT products. They are for testing and demonstration purposes only.
- They have NOT been reviewed or audited. Do NOT use for sensitive data.
- All functionality demonstrated is experimental.
- These are NOT meant to replace robust solutions like VeraCrypt, SimpleX Chat, Signal, WhatsApp, or WeTransfer. It's a proof-of-concept to show what's possible with browser APIs.
- Cyber security is full of caveats, so reach out for clarity on any details if they can’t be found in the docs.
Aiming to create the world's most secure messaging app.
https://positive-intentions.com/docs/projects/chat
- Open Source
- Cross Platform
- PWA
- iOS, Android, Desktop (self compile)
- App Store, Play Store (coming soon)
- Desktop
- Windows, MacOS, Linux (self compile)
- Run index.html on any modern browser
- Decentralized
- Secure
- No Cookies
- P2P end-to-end encrypted (E2EE)
- Forward secrecy
- No registration
- No installing
- Messaging
- Group Messaging (coming soon)
- Text Messaging
- Multimedia Messaging
- Screensharing (on desktop browsers)
- Offline Messaging (in research phase)
- File Transfer
- Video Calls
- Data Ownership
- Self-hosted
- GitHub Pages hosting
- Local-only storage
For more information on “how it works”, check out: https://positive-intentions.com/blog/decentralised-architecture
(Degoogled links to the apps)
- P2P Chat: https://chat.positive-intentions.com/
- P2P File: https://file.positive-intentions.com/
- Encrypted drive storage: https://dim.positive-intentions.com/?path=%2Fstory%2Fusefs--encrypted-demo
More:
For anyone else who was looking for it, this is the link to the threat model: https://positive-intentions.com/docs/research/threat-model/
That said, it seems quite thin on hard details, such as how identities (i.e. usernames) are managed (are they unique? how can users cross-check an online identity against a real person: fingerprints, QR codes, SHA-256 hashes?), whether they are considered publicly exchangeable, and how users are bootstrapped so they can find each other.
While a threat model is the bare minimum needed to even begin an assessment of anything that utters the word "security", I do have to ask:
Thanks for taking a look.
Firstly, I'd like to apologise for throwing the following blocks of AI-assisted text at you. I often use AI to create documentation for the project; I'm not much of a writer, and I'm sure the result is clearer than if I wrote it all myself.
The IDs are cryptographically random, which makes it reasonably certain that strangers cannot connect (the ID is an unguessable, ephemeral string). This ID is used with peerjs-server (open source and documented) so a peer can be reached at a predictable ID. When the ID is shared through some other trusted channel (e.g. WhatsApp, a QR code), the peers connect and establish encryption keys (see the links above). After the first connection (which is expected to be secure!), the previously established encryption keys can be used to authenticate the peer and prevent MITM.
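Roughly, the connection setup looks something like the sketch below. This is a simplified illustration, not the app's actual code; `shareOutOfBand` is just a placeholder for the "trusted channel" step.

```typescript
// Simplified sketch of the ID scheme described above (not the project's exact code).
import Peer from "peerjs";

// Generate an unguessable, ephemeral peer ID from a CSPRNG.
function randomPeerId(bytes = 32): string {
  const buf = crypto.getRandomValues(new Uint8Array(bytes));
  return Array.from(buf, (b) => b.toString(16).padStart(2, "0")).join("");
}

const myId = randomPeerId();
const peer = new Peer(myId); // registers the ID with the peerjs-server broker

// The ID is then shared through some other trusted channel (QR code, WhatsApp, etc.):
// shareOutOfBand(myId); // placeholder for the out-of-band step

// The counterparty, once it has the ID, dials it directly:
function connectTo(remoteId: string) {
  const conn = peer.connect(remoteId);
  conn.on("open", () => {
    // At this point the peers would run their key exchange and pin the
    // resulting keys for authenticating future sessions.
    conn.send("hello");
  });
}
```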
Long story short: this is my side project and I'm trying to get it off the ground. As I posted more about the project, I decided to create a website to document it. Questions like yours come up understandably often, so it made sense to answer them on the website. This includes things like the threat model. While one-shotting is something you can do with AI, the threat model took several days of learning, thinking and consideration. I also posted it on Reddit for feedback and updated it accordingly.
Am I a cryptographer yet? Having worked on this project I must have picked some things up, but I still find that I need to learn much more.
I hope admitting I used AI doesn't undermine the effort I put in. I try to communicate details in places like Lemmy, and the code is open source. AI lets me demonstrate granular functionality that is easier for me to test as well as present to professionals, in contrast to presenting overwhelmingly complicated code on GitHub. For example, for my cryptography functionality I created a separate repo to try things out for my own learning: https://cryptography.positive-intentions.com/?path=%2Fstory%2Fcryptography-introduction--welcome
There are good and bad ways to use AI, and I believe I'm doing it responsibly. I have been a coder for 15+ years; I can do it myself, I simply can't type as fast as AI, which makes it indispensable on a project of this scale. I completely understand your concerns and I'm all ears for advice on a Reddit post I asked: https://www.reddit.com/r/CyberSecurityAdvice/comments/1lekrsx/what_advicebestpractices_are_there_for_creating/
(It's why, in my app, website and posts like this one, I try to strike a note of caution.)
Please understand this in the kindest possible way: if you were not willing to write the documentation yourself, why should I want to review it? I too could use an AI/LLM to distill documentation rather than posting this comment, but I choose not to, because I believe that open discussion is a central tenet of open-source software. Even if you are not great at writing technical English, any attempt at all will be more germane to your intentions and objectives than what an LLM generates. You would have had to describe your intentions and objectives to the LLM first anyway; you might as well get real-life practice at writing.
It's not that AI and LLMs can't find their way into the software development process, but the question is to what end: using an AI system to give the appearance of a fully fleshed-out project when it isn't is deceitful. Using an AI system to learn, develop, and revise the codebase, to the point that you yourself can adequately teach someone else how it works, is divine.
With that out of the way, we can talk about the high-level merits of your approach.
What is the lifetime of each user's public/private keypair? What is the lifetime of the symmetric key shared between two communicating users? The former is important because people can and do lose their private key, or have a need to intentionally destroy it. In such an instance, does the browser app explicitly invalidate the key and inform the counterparty? Or do keys silently disappear, taking the message history with them?
The latter is important because the longer a symmetric key is used, the more ciphertext a malicious actor can store now and decrypt later, possibly once quantum computers can break today's encryption. More pressing, though, is that a leak of the symmetric key reveals all prior and future messages until the key is rotated.
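To illustrate the kind of rotation I mean, here is a minimal sketch using only the Web Crypto API (not your app's scheme, just an example): a one-way ratchet that derives each new chain key with HKDF and discards the old one, so that compromising today's key does not recover yesterday's ciphertext.

```typescript
// One-way symmetric ratchet sketch: derive the next chain key and a one-time
// message key from the current chain key, then forget the current chain key.
async function ratchet(chainKey: Uint8Array): Promise<{ next: Uint8Array; messageKey: CryptoKey }> {
  const hkdfKey = await crypto.subtle.importKey("raw", chainKey, "HKDF", false, ["deriveBits"]);
  const derive = (info: string) =>
    crypto.subtle.deriveBits(
      { name: "HKDF", hash: "SHA-256", salt: new Uint8Array(32), info: new TextEncoder().encode(info) },
      hkdfKey,
      256
    );
  const next = new Uint8Array(await derive("chain")); // becomes the new chain key
  const mkBits = await derive("message");             // one-time message key material
  const messageKey = await crypto.subtle.importKey("raw", mkBits, "AES-GCM", false, ["encrypt", "decrypt"]);
  return { next, messageKey }; // caller must overwrite/forget the old chainKey
}
```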
I take substantial notice whenever a promise of “true privacy” is made, because it either delivers a very strange definition of privacy, or relies upon the reader to supply their own definition of what privacy means to them. When privacy is on offer, I’m always inclined to ask: privacy from whom? From network taps? From other apps running in the same browser?
This document pays only lip service to some notion of privacy, without any concrete terms. Instead, it spends a whole section attempting to solve secure key exchange, but that simply boils down to "the user validates a hash they received through a secure medium". If a secure medium existed, then secure key exchange would already be solved; if there isn't one, using an a priori hash of the expected key is still vulnerable to hash attacks.
I applaud you for undertaking an interesting project, but you also have to be aware that many others have tried their hand at secure messaging, with more failures than successes. The blog posts of Soatok show us the failures within just the basic cryptography, and that doesn't even get to some of the privacy issues that exist separately. For example, until Signal added support for usernames, it was mandatory to reveal one's phone number to bootstrap the user's identity. That has since been fixed, but they go into detail about why it wasn't easy to arrive at the present solution.
I recall a recent post I saw on Mastodon, where someone implementing a cryptographic library made sure to clarify that they were a "cryptography engineer" and not a cryptographer, because they themselves have to consult with a cryptographer regarding how the implementation should work. That is to say, they recognized that although they are writing the code which implements a cryptographic algorithm, the guarantees come from the algorithm itself, which is understood by and discussed amongst cryptographers. Sometimes nicely, and other times necessarily very bluntly. Those examples come from this blog post.
I myself am definitely not a cryptographer. But I can reference the distilled works of cryptographers, such as this 1999 post which still finds relevance today:
I wish you the very best with this endeavor, but also caution as the space is vast and the pitfalls are manifold.
Sorry for the delay in responding. Personal matters required more focus, and to reply to you I wanted to set aside some time to write clearly.
I'm not entirely bad at writing (technical or otherwise); I had to write plenty to get to where I am now in the project, and I usually write in my own words, like now. The blog articles you see on the website are from old Reddit posts. Questions like yours are understandably frequent, so it made sense to create the website and blog to address FAQs. I think it's important to note how I'm using AI here: rather than telling AI "here are some bullet points, now turn them into an article", I write the content and details myself and then have AI reword it. I think the resulting content is clearer for it.
The implementation sits on top of a WebRTC connection, which mandates its own encryption keys. My app adds an additional public/private keypair and symmetric keys on top of that. These are persisted to browser storage (IndexedDB). The keys are cleared if the user performs a logout (it's all client-side, so there is no actual "logout"; it just clears the local data).
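In rough terms, the key storage works something like the sketch below. This is a simplified illustration with assumed names and key types, not the actual code in the repo.

```typescript
// Sketch: generate the extra keypair/symmetric key and persist them in IndexedDB
// as non-extractable CryptoKeys; "logout" just wipes the local store.
const DB = "keystore", STORE = "keys";

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB, 1);
    req.onupgradeneeded = () => req.result.createObjectStore(STORE);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function createAndStoreKeys() {
  // ECDH keypair for agreeing keys with peers; AES-GCM key for local symmetric use.
  const keyPair = await crypto.subtle.generateKey({ name: "ECDH", namedCurve: "P-256" }, false, ["deriveKey"]);
  const symKey = await crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);

  const db = await openDb();
  const tx = db.transaction(STORE, "readwrite");
  tx.objectStore(STORE).put(keyPair, "identity"); // CryptoKeys survive structured clone
  tx.objectStore(STORE).put(symKey, "symmetric");
}

async function logout() {
  const db = await openDb();
  db.transaction(STORE, "readwrite").objectStore(STORE).clear();
}
```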
Key rotation is a work in progress and not testable in the app yet. While I could have a button that says "rotate keys", I'm planning to frame it as something like "block contact". This is because it makes sense to keep user IDs static, so that in future sessions the app can automatically connect to known peers. If you want to block someone, it makes sense to abandon that ID so they cannot ping you with it. When you connect to a known peer that doesn't know your new ID, the previously established keys can be used to verify each other and update the contact details accordingly (see the sketch below).
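For the "known peer, new ID" step, the idea is roughly the following sketch: the peer announcing a new ID signs it with a key pinned from an earlier session, and the receiver verifies that signature before updating the contact. The names and key types here are assumptions for illustration, not the implemented code.

```typescript
// Hypothetical sketch: re-verify a known peer via a previously pinned signing key.
async function signNewId(newId: string, myPrivateKey: CryptoKey): Promise<ArrayBuffer> {
  return crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    myPrivateKey,
    new TextEncoder().encode(newId)
  );
}

async function verifyNewId(newId: string, signature: ArrayBuffer, pinnedPublicKey: CryptoKey): Promise<boolean> {
  // pinnedPublicKey was stored when the contact was first (securely) established.
  return crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    pinnedPublicKey,
    signature,
    new TextEncoder().encode(newId)
  );
}
// If verifyNewId(...) resolves true, the contact entry can be updated to the new ID.
```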
It's also possible to export the data to a file and later load the profile from it. The export is currently static and unencrypted; there will be an option to have it all password-encrypted (a sketch of that is below). https://www.reddit.com/r/cryptography/comments/1lhjpxk/veracryptlike_functionality_from_a_browser/
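The password-encrypted export would look roughly like this sketch, using only the Web Crypto API (assumed parameters, not the final implementation): PBKDF2 stretches the password into an AES-GCM key that encrypts the profile JSON.

```typescript
// Sketch of a password-encrypted profile export.
async function encryptProfile(profile: object, password: string): Promise<Blob> {
  const enc = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));

  const baseKey = await crypto.subtle.importKey("raw", enc.encode(password), "PBKDF2", false, ["deriveKey"]);
  const aesKey = await crypto.subtle.deriveKey(
    { name: "PBKDF2", salt, iterations: 600_000, hash: "SHA-256" },
    baseKey,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt"]
  );

  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    aesKey,
    enc.encode(JSON.stringify(profile))
  );

  // Store salt and IV alongside the ciphertext so the file can be decrypted later.
  return new Blob([salt, iv, new Uint8Array(ciphertext)], { type: "application/octet-stream" });
}
```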
Completely understandable. As mentioned in the post, cybersecurity is full of caveats. Here is a previous attempt to outline some details: https://www.reddit.com/r/cryptography/comments/1evdby4/is_this_a_secure_messaging_app/
I'm also investigating various approaches to exchanging data offline with QR codes (the rough idea is sketched below).
(written by me): https://www.reddit.com/r/positive_intentions/comments/1b5j424/file_sharing_by_qr_code/ (written by having AI transcribe my wording): https://positive-intentions.com/blog/qr-codes-as-a-data-channel
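The gist is splitting a payload into frames small enough to fit in a single QR code and reassembling them on the other side. A toy sketch (not the project's implementation; the QR rendering/scanning itself is left to whichever library the app uses):

```typescript
// Toy sketch of QR codes as a one-way data channel: indexed frames in, payload out.
interface Frame { index: number; total: number; data: string } // base64 chunk

function toFrames(payload: Uint8Array, chunkSize = 800): Frame[] {
  const frames: Frame[] = [];
  const total = Math.ceil(payload.length / chunkSize);
  for (let i = 0; i < total; i++) {
    const chunk = payload.slice(i * chunkSize, (i + 1) * chunkSize);
    frames.push({ index: i, total, data: btoa(String.fromCharCode(...chunk)) });
  }
  return frames; // each frame would be JSON.stringify'd and rendered as a QR code
}

function fromFrames(frames: Frame[]): Uint8Array {
  const ordered = [...frames].sort((a, b) => a.index - b.index);
  const bytes = ordered.flatMap((f) => Array.from(atob(f.data), (c) => c.charCodeAt(0)));
  return new Uint8Array(bytes);
}
```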
I'd also like to investigate other things a browser can do, like exchanging encryption data over NFC.
It isn't user-friendly yet, but I also have some basic functionality for peer-to-peer broker connections to avoid needing the peerjs-server (which acts as the broker). Some details there are unclear and could do with clarification; they can be seen here: https://github.com/positive-intentions/chat/issues/6
The existing key exchange should already be secure enough, but users would understandably want to be sure my code doesn't have a critical bug, and validating hashes provides that bit extra.
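The hash users would compare out of band is essentially a fingerprint of the peer's public key. A small sketch of what that could look like (assumed, not the app's exact code):

```typescript
// SHA-256 fingerprint of a peer's public key, shown as hex for manual comparison.
async function fingerprint(publicKey: CryptoKey): Promise<string> {
  const raw = await crypto.subtle.exportKey("raw", publicKey); // raw export of an EC public key
  const digest = await crypto.subtle.digest("SHA-256", raw);
  return Array.from(new Uint8Array(digest), (b) => b.toString(16).padStart(2, "0")).join(":");
}
// Both users render fingerprint(theirPeerKey) and compare the strings over a trusted
// channel (in person, a call, a QR code); a mismatch indicates a possible MITM.
```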
I have seen some others myself, and I still believe my approach is unique. There are of course limitations to the webapp form factor, but it also provides a lot of flexibility in just being able to run in a browser. While many try, succeed or fail, this is my attempt. I have been refining my approach with feedback and there is still much to do. At this point I don't consider it insecure, but the UI is pretty ugly, which combined with various UI bugs is deterring users. With the code being open source, I often try to present some concepts in a more digestible way with code examples, as seen here:
There is a lot to learn, but by breaking things into small parts I can better learn how it all fits together.
I like that term; it's new to me. I normally just call myself a web developer to clarify my expertise, which fits better than "cryptography engineer". I open source my work for transparency, but it's also great for my own learning.
Thanks for the good wishes. Hopefully I get to a stage where it's better presented as a product and not just a proof-of-concept.