

While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.
Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of the free internet and the democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had the ability to use multicast, I could send a single video stream up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all 1000 of my subscribers. My 100 viewers in Japan would be served by a single stream in the trans-Pacific backbone that gets split once it touches land. Is that all correct?
In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices, communicating with each other equally well across the global internet without any SERVERS, CDNs, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency all 1000 of my viewers can see my stream. Within 2 seconds a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth usage, we are already within a factor of two of the ideal case of working multicast!
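Sanity-checking that arithmetic with a quick sketch (assuming a perfect fanout tree and a made-up 100 ms per relay hop):

```python
import math

HOP_LATENCY_S = 0.1  # assumed per-relay latency; pick your own number

def hops_needed(viewers: int, fanout: int = 2) -> int:
    """Depth of a perfect fanout tree that reaches every viewer,
    when each peer re-uploads the stream to `fanout` others."""
    return math.ceil(math.log(viewers, fanout))

for n in (1_000, 1_000_000):
    h = hops_needed(n)
    print(f"{n:>9,} viewers: {h} hops, ~{h * HOP_LATENCY_S:.0f} s of latency")
#     1,000 viewers: 10 hops, ~1 s of latency
# 1,000,000 viewers: 20 hops, ~2 s of latency
```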
It is true that my 100 peertube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but that is only because the peertube protocol is not yet geography-aware! (Or maybe it already is?) Have you considered adding geographic awareness to peertube instead? Then only one viewer in Japan would receive my stream directly and pyramid-share it with all the other Japanese viewers.
P2P, IPv6, and geographic awareness are things you can pursue right now, and they get you within better than a factor of 2 of the ideal multicast dream! Is a factor of 2 an acceptable rate of resource waste? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of peertube, like being able to watch a video that is NOT a livestream, or being able to read a comment that was posted while your device was powered off.
Also, I am intrigued by the great concern you show for intercontinental bandwidth usage, considering those pipes are owned by the same kinds of big for-profit companies as the walled-garden social networks and CDNs you find so distasteful. From the other end, the reason geographic awareness has not already been implemented in bittorrent and most other P2P protocols is precisely that bandwidth has been so plentiful. I can easily visit any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based serverless livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.
Yes, I’m using “geographic awareness” here as shorthand for the same kind of algorithm BGP uses to calculate the shortest route. As far as I know, BGP has no knowledge of “countries” or “continents”; it makes decisions purely on local policy and the connectivity info available to it. However, the resulting topology map greatly resembles the corresponding geographic map, a natural consequence of the internet being a physical engineering structure. I’m not sure how publicly available the global BGP data is. If you were designing a backbone-bandwidth-preserving P2P app, you would either feed it BGP data directly or, if that’s not available, give it the world map to get most of the same benefit.
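As a sketch of what that peer selection might look like (the peer names and AS-hop numbers here are invented; in practice you would derive them from public BGP dumps such as RouteViews, or fall back to rough geographic distance):

```python
# Toy topology-aware peer selection: re-upload to the topologically
# nearest peers first, so the stream crosses long-haul links as few
# times as possible. All numbers below are hypothetical.

def pick_upload_targets(as_hops: dict[str, int], fanout: int = 2) -> list[str]:
    """Choose the `fanout` peers with the shortest AS paths from us."""
    return sorted(as_hops, key=as_hops.get)[:fanout]

as_hops = {          # peer -> AS-path length from my network (made up)
    "viewer-tokyo-1": 7,
    "viewer-tokyo-2": 7,
    "viewer-london":  5,
    "viewer-local":   1,
}
print(pick_upload_targets(as_hops))  # ['viewer-local', 'viewer-london']
```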
The multicast proposal would need to be routed through the very same ISP-obscured topology, so there is no advantage over topology-aware P2P.
As a graph problem, it looks to me like a factor of 2 is practical.
First consider a hypothetical topology-aware “daisy chain” scheme, where every swarm user has an upload ratio of exactly one. Then every backbone and last-mile connection gets used exactly twice, which is why I say a factor of 2 is the upper limit. It’s like a maze problem where you can navigate the entire maze while traversing each corridor only twice. Then look at the more practical “pyramid” scheme, where half the users have an upload ratio of about 2. Some links get used twice, but many get used only once! The UK-UK1 link is the only one used 3 times. Notably, the US-JP and US-UK transcontinental links get used only once, as you wanted! Overall this pyramid scheme looks to me to be within 20% of the efficiency of the optimal multicast scheme.
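To make the counting concrete, here is the same exercise on a toy star topology (my stand-in for the diagram upthread, so the node names and link layout are invented): backbone traffic is just the number of relay hops whose route crosses each transoceanic link.

```python
from collections import Counter

# Toy topology: US is the hub; JP and UK hang off transoceanic links.
region = {"phone": "US", "us1": "US", "us2": "US",
          "jp1": "JP", "jp2": "JP", "uk1": "UK", "uk2": "UK"}

def backbone_links(a: str, b: str) -> list[str]:
    """Transoceanic links crossed by one relay hop from a to b."""
    if region[a] == region[b]:
        return []  # same region: only local links, no backbone cost
    return [f"US-{region[h]}" for h in (a, b) if region[h] != "US"]

def link_usage(tree_edges) -> Counter:
    usage = Counter()
    for sender, receiver in tree_edges:
        usage.update(backbone_links(sender, receiver))
    return usage

# Topology-aware pyramid: cross each ocean once, then fan out locally.
pyramid = [("phone", "us1"), ("phone", "us2"),
           ("us1", "jp1"), ("jp1", "jp2"),
           ("us2", "uk1"), ("uk1", "uk2")]

# Topology-blind swarm: peers picked at random, stream ping-pongs.
naive = [("phone", "jp1"), ("phone", "uk1"),
         ("jp1", "us1"), ("uk1", "us2"),
         ("us1", "jp2"), ("us2", "uk2")]

print(dict(link_usage(pyramid)))  # {'US-JP': 1, 'US-UK': 1}
print(dict(link_usage(naive)))    # {'US-JP': 3, 'US-UK': 3}
```

The naive ordering is exactly the “sent through the pipe multiple times (and even back again)” case from before; topology awareness alone recovers the multicast-optimal count on the transoceanic links.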
What do you think backbone routers are? They are computers! Specialized for a particular task, but computers nonetheless. Owned by someone other than you. Your whole lament is that you can’t force those owners to implement multicast on their routers. I think using the royal “our” computer, something we can do right now without forcing anyone else, is much better by comparison. If you insist that P2P swarm members, they who actually want to see your livestream, are not good enough, that you only want to use “your” computer to broadcast and no one else’s, then you are left with no options other than bouncing HAM video signals off the ionosphere. And even the radio spectrum is claimed by governments.
I think you underestimate the size. Imagine if multicast were ubiquitous: billions of internet-connected users, each with dozens? hundreds? of multicast subscriptions. Each video content creator is a multicast, as is each blog you follow, each Twitter handle, each lemmy community you subscribe to. Hundreds easily. That’s many gigabytes, possibly hundreds of gigabytes, of state to fit into every router. BGP is simple because you care only about the physical links you actually have. You can stuff entire IP ranges into a single routing table entry. Your entire table could be a dozen entries. Fits inside the silicon. With multicast I don’t think you can fold it in; you must keep the entire many-to-many table on every single router[1], and consult the 100 GB table to route every single packet, in case it needs to get split. As you said, impossible with 1990s technology, probably possible but contrary to business goals in 2020.
You are concerned about the battery life of your phone when you use the bandwidth of 2 video streams instead of watching just 1? Yet you expect every single router owner to plug in hundreds of gigabytes of extra RAM and spend extra CPU power and electricity on routing-table lookups to handle your multicast traffic for you. You are just offloading the resource usage onto other people’s computers! Not “our” computers - “theirs”. Remember how much criticism Bitcoin got for wasting resources? Not the proof of work, but having to store a duplicate copy of the 100 GB transaction blockchain on every single node. All that hard drive space wasted! When “Mastercard” and “Visa” can do it with only a single database on a mainframe. Yet now you want “them” to do the same and “waste” 100 GB of RAM on every single router just so your battery life is a little better.
This does not follow. Didn’t you say that multicast was already sabotaged by the very same cable-distribution networks to maintain their send-monopoly? You expect to force the ISPs to turn multicast back on and somehow have it fly under the radar, but P2P would get the screws turned? It can’t be one and not the other! If you plan to have governments force the ISPs to fall in line and implement multicast standards, then why couldn’t you have the same governments (driven by the democratic pressure of billions of internet users demanding freedom, presumably) enshrine P2P rights? Again, remember that P2P is something we already have, something that already works and can be expanded with no additional cooperation from other players. Multicast is something that would need to be forced on others, on everyone, and would require physical hardware upgrades. If there are future restrictions on P2P, they would be easier to defend against politically and technologically. If you cannot defend P2P, then you surely do not have enough political power to force multicast.
[1]: Thinking about this some more, maybe you could fold it in a little. Given N internet users (~a billion), each with S subscriptions (say a hundred), C content feeds (a hundred million? 10% of users are also creators, 90% are pure consumers), and P physical links per router (say ten), then instead of N×S state (on the order of 100 GB), each router could fold it down to C×P state (on the order of 1 GB). As in “if I receive a multicast packet from [source ip=US.5.6.7] to [destination ip=anyone], route copies of it out through phy04, phy07, and phy12”. You would still need a mechanism to propagate table changes pretty rapidly (a full refresh about once every minute?). Your phone can be switching cells or powering on and off, and you don’t want to multicast packets to a powered-off IP - that would be a waste of resources!
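Spelling out that footnote’s arithmetic with the numbers assumed above (a byte per table entry is a rough guess, but good enough for orders of magnitude):

```python
N = 1_000_000_000  # internet users
S = 100            # multicast subscriptions per user
C = 100_000_000    # content feeds (10% of users also create)
P = 10             # physical links per router

naive  = N * S  # one entry per (user, subscription) pair
folded = C * P  # one flag per (feed, outgoing link) pair

print(f"naive:  ~{naive  / 1e9:,.0f} GB per router at a byte per entry")
print(f"folded: ~{folded / 1e9:,.0f} GB per router at a byte per entry")
# naive:  ~100 GB per router at a byte per entry
# folded: ~1 GB per router at a byte per entry
```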
And how do you detect oversubscription? If a million watchers subscribe to 1 multicast livestream, that’s fine, but what happens when 1 troll subscribes to a million livestreams? If I subscribe to a million video streams, obviously my last-mile connection cannot fit them all. With TCP unicast, the senders would stop receiving TCP ACK replies from me and throttle down. But with multicast, the routers in between know nothing about my last mile, or even whether my phone has been powered on within the last minute. All they know is “if receive multicast from IP1, send to phy04; if receive multicast from IP2, send to phy04;” and so on. Would my upstream routers not get saturated trying to send a million video streams to a dead IP? Would we need to implement some sort of reverse-multicast version of the “TCP ACK”?
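For what it’s worth, deployed IP multicast answers the dead-IP half of this with soft state: downstream interest has to be periodically re-asserted (IGMP membership reports, PIM joins), and entries that aren’t refreshed simply time out. A toy sketch of just the expiry mechanism (the real protocols are far more involved):

```python
import time

REFRESH_TIMEOUT_S = 60.0  # interest expires unless re-asserted (assumed value)

class MulticastPort:
    """One router port's soft-state view of which source feeds it
    should copy packets out on. Toy illustration only."""

    def __init__(self):
        self.last_join: dict[str, float] = {}  # feed source -> last join heard

    def join(self, feed: str) -> None:
        """A downstream subscriber (re-)asserts interest in `feed`."""
        self.last_join[feed] = time.monotonic()

    def should_forward(self, feed: str) -> bool:
        heard = self.last_join.get(feed)
        if heard is None:
            return False
        if time.monotonic() - heard > REFRESH_TIMEOUT_S:
            del self.last_join[feed]  # dead phone: stop copying packets
            return False
        return True
```

Note that this only bounds how long a dead subscriber keeps soaking up bandwidth; it does nothing about the troll, whose million joins each cost upstream state and traffic until they expire.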