• Natanael@infosec.pub
      2 days ago

      In fact, it is worse than the storage requirements, because the message delivery requirements become quadratic at the scale of full decentralization: to send a message to one user is to send a message to all. Rather than writing one letter, a copy of that letter must be made and delivered to every person on earth

      That’s written assuming the edge case of EVERYBODY running a full relay and appview, and that’s not per-node scaling cost but global scaling cost.

      Because they don’t scale like that, global cost is geometric instead: for every full relay and appview, there’s one full copy whose cost scales linearly with network activity. Each server only handles the cost of serving its own users’ activity (plus firehose/jetstream subscription & filtering for those who need it).
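
      To make the difference concrete, here’s a rough cost model (my own simplification, not anything from the AT Protocol docs): a handful of full relays each keep one copy of the firehose, so global cost grows linearly with activity, while the “everybody runs a full relay” edge case from the quoted article is quadratic in users.

      ```python
      def global_relay_cost(num_full_relays: int, network_activity: int) -> int:
          """Total copies of network activity processed across all full relays:
          one full copy per relay, scaling linearly with activity."""
          return num_full_relays * network_activity

      def everyone_is_a_relay_cost(num_users: int, messages_per_user: int) -> int:
          """Degenerate edge case: every user runs a full relay, so every
          message is delivered to every user -- quadratic in users."""
          total_messages = num_users * messages_per_user
          return num_users * total_messages

      # A few full relays scale linearly with network activity...
      print(global_relay_cost(5, 1_000_000))        # -> 5000000
      # ...but "everybody runs a relay" explodes quadratically:
      # 10,000 users * (10,000 * 10 messages) copies delivered.
      print(everyone_is_a_relay_cost(10_000, 10))   # -> 1000000000
      ```

      The numbers are arbitrary; the point is the shape of the curves.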

      For Mastodon instance costs, try asking the former maintainers of https://botsin.space/

      • sem@lemmy.blahaj.zone
        15 hours ago

        I’m sad that bots in space had to spin down, but there are still bots on Mastodon. One server quitting didn’t take everything down.

        The part where, if a Mastodon post gets popular, the server has to serve it to everyone makes sense, because it’s kind of like a website. Maybe there could be a CDN like Cloudflare that a Mastodon server could use to cache responses?

        The part about Bluesky that doesn’t sound good to me is “to send a message to one user is to send it to all”. Wouldn’t this be crazy with even 100 servers for 10,000 users, vs 2 servers with 5,000 each? Not sure how the math works, but it doesn’t look good if they have to duplicate so much traffic.
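
        Running my own hypothetical numbers for the worst case, where every server must receive a copy of every message (full fan-out; these figures are made up, not real Bluesky traffic):

        ```python
        def total_deliveries(num_servers: int, total_messages: int) -> int:
            # full fan-out: each message is copied to every server
            return num_servers * total_messages

        MESSAGES = 10_000  # say each of the 10,000 users posts once

        print(total_deliveries(100, MESSAGES))  # -> 1000000
        print(total_deliveries(2, MESSAGES))    # -> 20000
        ```

        So yes, the same activity costs 50x more deliveries with 100 servers than with 2, which is why (per the comment above) only a few nodes run full relays rather than every server subscribing to everything.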