flop_leash_973, (edited )

Would be more exciting and worth paying attention to if Google Fiber wasn’t basically living in an iron lung over at Alphabet these days since they halted major expansion.

DoucheBagMcSwag,

Wait is fiber rollout back? …

b0gl,

I’ll never understand how you guys in the US are fine with having bandwidth limits on your broadband connections. I’d be pissed. I even have unlimited on my phone. Like wth?

enthusiasticamoeba,

What makes you think people are fine with it? ISPs have monopolies over service areas and can do whatever the fuck they want. They have monopolies because of corporate lobbying. No amount of voting gets these corrupt fucks out of office bc votes literally do not matter and there’s only two parties, they’re both to the right of center, and they’re both bought and sold. Just to really make sure, we’re all taught from birth that the US is peak civilization and all other countries are backwater shitholes.

merlinf,

Where in the world do you not have bandwidth limits? If there were no bandwidth limits I could just DOS my entire ISP by downloading petabytes between two of my own computers.

Meltrax,

I think you are mistaking bandwidth limits for data caps?

At some point all devices have a bandwidth limit. Even if you somehow had a 10Gb/sec phone data connection (which is absolutely not possible), your phone literally cannot transfer data that fast.

Byter,

If you’re struggling to think of a use-case, consider the internet-based services that are commonplace now that weren’t created until infrastructure advanced to the point they were possible, if not “obvious” in retrospect.

  • multimedia websites
  • real-time gaming
  • buffered audio – and later video – streaming
  • real-time video calling (now even wirelessly, like Star Trek!)
  • nearly every office worker suddenly working remotely at the same time

My personal hope is that abundant, bidirectional bandwidth and IPv6 adoption, along with cheap SBC appliances and free software like Nextcloud, will usher in an era where the average Joe can feel comfortable self-hosting their family’s digital content, knowing they can access it from anywhere in the world and that it’s safely backed up at each member’s home server.

bamboo,

I doubt a home server centered around software like Nextcloud will ever become commonplace. I think a more probable solution involves integrating new use cases with devices people already have, or at least familiar form factors. For example, streaming from your smart TV device (Chromecast, Roku, Apple TV, the actual TV itself) instead of from the cloud, or file sync using one of these devices as an always-on server. But in both of these cases, there is an inherent benefit to using a centralized cloud operator. What are the odds that you have already downloaded an episode to stream to your TV box, rather than to your phone, if the phone was where you intended to watch it anyway? And for generic storage, cloud providers replicate that data for you in various locations to ensure higher redundancy and availability than what could be guaranteed by a home server or similar device. I presume new use cases will need to be more creative.

MeanEYE,
@MeanEYE@lemmy.world avatar

Also, going big on bandwidth ahead of the demand curve means most people won’t use it to its full extent for a while. It’s much easier to implement and maintain such a network than one that’s trying to catch up with need.

frezik,

Video calls were all over 1950s futurism articles. These things do get anticipated far ahead of time.

4K Blu-ray discs have a maximum bitrate of 128 Mbps. Most streaming services compress more heavily than that; they’re closer to 30 to 50 Mbps. A 1Gbps feed can easily handle several people streaming 4K video on the same connection, provided there are some quality-of-service guarantees.
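The headroom claim is easy to sanity-check with a quick sketch, using the bitrates quoted above as the only inputs:

```python
# How many 4K streams fit on a 1 Gbps line, ignoring protocol overhead.
# The 50 Mbps and 128 Mbps figures are the ones mentioned above.

LINE_MBPS = 1000
STREAM_MBPS = 50     # upper end of typical 4K streaming-service bitrates
BLURAY_MBPS = 128    # 4K Blu-ray maximum bitrate

print(LINE_MBPS // STREAM_MBPS)  # 20 simultaneous streaming-quality feeds
print(LINE_MBPS // BLURAY_MBPS)  # 7 feeds even at full Blu-ray bitrate
```

So even at full Blu-ray quality a household of four has plenty of margin.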

If other tech were there, we could likely stream a fully immersive live VR environment to nearly holodeck-level realism on 1Gbps.

IPv6 is the real blocker. As you say, self-hosting is what could really bring bandwidth usage up. I think some kind of distributed system (something like BitTorrent) is more likely than files hosted on one specific server, at least for publicly available files.

Paradox,
@Paradox@lemdro.id avatar

I have 10 gig at home, and powerful enough networking hardware that can take advantage of it (Ubiquiti stuff)

Nothing can ever saturate the line. So it’s great for aggregate, but that’s it

LukeMedia,

It’s not often that I can saturate a 1Gbps line, unless you have a large household I don’t see much point in going over 1Gbps right now. Though I’m sure there are some exceptions.

AA5B,

That’s what I was gonna say: it’s not that I use enough bandwidth to really need 1gbps, but the line is never even temporarily saturated. Just rock solid

MeanEYE,
@MeanEYE@lemmy.world avatar

Having a connection that’s not even close to saturated (or a backbone, for that matter) means lower latency in general. But it also means future-proofing and timely issue resolution, as you catch problems early on.

LukeMedia,

Future proofing an Internet line doesn’t make much sense to me. If a higher speed plan is available, I’d just upgrade my plan if the need arises, save money in the meantime.

frezik,

Flip it around and look from the ISP’s point of view. Once fiber is connected to a house, there are few good reasons to use anything else. Whoever is first to deploy it wins.

Now look at it from a monopoly ISP’s point of view. You’re providing 100Mbps service on some form of copper wire, and you’re quite comfortable leaving things like that. No reason to invest in new equipment beyond regular maintenance cycles. If some outside company tries to start deploying fiber, and if they start to make inroads, you’re going to have to (gasp) spend hundreds of millions on capital outlays to compete with them. Better to spend a few million making sure the city never allows them in.

MeanEYE,
@MeanEYE@lemmy.world avatar

That too. For an ISP it pays off to future-proof to a degree. More to the point, it’s easy to aggregate high-bandwidth users, since no one will be using their full connection speed all the time; it’s simply impossible. So with 100Gbps of backhaul they can sell 25Gbps service to a lot more people than 4, closer to 40 or so. Good marketing, and a way to test and prepare for the future at a decent investment now. It’s how things should be.
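A rough sketch of that aggregation math. The 10% average-utilisation figure is purely an illustrative assumption, not an ISP statistic:

```python
# Oversubscription sketch: how many 25 Gbps subscribers can share a
# 100 Gbps uplink if each one only averages some fraction of their plan.

def max_subscribers(uplink_gbps: float, plan_gbps: float, avg_utilisation: float) -> int:
    """Subscribers that fit if each averages plan_gbps * avg_utilisation."""
    return int(uplink_gbps / (plan_gbps * avg_utilisation))

print(max_subscribers(100, 25, 1.0))   # no oversubscription: 4 subscribers
print(max_subscribers(100, 25, 0.10))  # 10% average utilisation: 40 subscribers
```

The "closer to 40" figure falls out of assuming subscribers average about a tenth of their plan speed.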

LukeMedia,

Sorry, I had only thought about it from the point of view of a customer future-proofing their Internet plan, not the ISP future-proofing. I replied from that viewpoint, since the original comment was from that pov. I didn’t think about it otherwise, but you’re right that ISPs should future-proof. That’s not silly.

ours,

Same, I got 10gbit because there was some competition early on as fiber coverage got wider. Now my same provider has slower offers at lower prices, but I don’t mind the extra bandwidth in case I need it, and I have a grandfathered offer so I pay the same as for 1gbit.

LukeMedia,

Paying the same rate is certainly an instance where it makes sense. Plus, you can show off to friends!

onlinepersona,

Man, I’d love to sit on that. Growing up with 56k and living with 100Mb/s now is already a big difference, but it shows when I push and pull docker images or when family accesses the homeserver. 1Gb/s would be better, but probably I’ll somehow use up the bandwidth with a new toy. 10Gb would keep me busy for a long time. 20Gb would allow me try out ridiculous stuff I haven’t thought of yet.

Kyrinar,

I just want an internet provider that isn’t Spectrum or single-digit download speeds. Not having any real choice fucking sucks, especially since Spectrum is horrible.

Had AT&T fiber at my old place and god damn that shit went down one time for an hour the whole 3 and a half years I was there

pdxfed,

Have you looked at mobile broadband from T-Mobile or Verizon? I haven’t tried either personally, but if I were in a broadband desert or an oligopoly market like most Americans, I would definitely give it a try and see how performance is. Prices weren’t great when released, maybe $50+/mo. for home internet, while around here you can get $30-40/mo from fixed-line providers CenturyLink, FiOS/Ziply, or comcrap. Feels like the mobile carriers really missed an opportunity by not pricing it cheaper to add a ton of subs, or at least get people to try it.

LemmyIsFantastic,

I couldn’t care less tbh. Gigabit is more than enough.

frezik,

And we’re still stuck on IPv4. Going to IPv6 would do a lot more than 1Gbps connections would.

ripcord,
@ripcord@kbin.social avatar

And what do you think it would do for you?

frezik,

  • Better routing performance
  • No longer designing protocols that jump through hoops to deal with lack of direct addressing

lud,

No longer designing protocols that jump through hoops to deal with lack of direct addressing

Fucking CGNAT…

MeanEYE,
@MeanEYE@lemmy.world avatar

Sorry to be the one to mention it, but NAT is here to stay. Even if IPv6 has enough address space for everything to have a public address, it’s still a good security measure to have a local area network behind a firewalled exit node. Especially considering how popular IoT has become and just how little people care about the security of those devices.

frezik,

No, stop this. NAT is not a security measure. It was not designed as one, and does not help security at all.

onlinepersona,

Why doesn’t it help security? Is everybody’s device supposed to be publicly accessible?

frezik,

Because hiding addresses does very little. A gateway firewall does not need NAT to protect devices behind it.

In fact, NAT tends to make things more complicated, and complication is the enemy of security. It’s one extra thing that firewalls have to account for. Firewalls behind NAT also don’t know where traffic is originally coming from, meaning they have one less tool at their disposal. This gets even worse with CGNAT, which sometimes has multiple levels of NAT.

Security is a very common objection to getting rid of NAT, and it’s wrong.

MeanEYE,
@MeanEYE@lemmy.world avatar

I still consider it an important part of the whole package. It’s not a be-all end-all solution, but hiding your private network from the outside world is a good first step. In the situation you are describing, DHCP would have to sit with the ISP, effectively giving them control over what you get to install at your home, or letting them limit the bandwidth of certain devices, which is a huge issue. Of course you can do traffic shaping with NAT as well, but then the whole connection has to be limited, not an individual device. While NAT does complicate things a lot, and I mean a lot, it does provide a level of segregation and control which you can’t have otherwise.

So the choice boils down to either running a proxy/gateway or NAT, and the latter is far easier for the common user since routers come pre-configured. Or, worst case scenario, give a public IP to everything and mess around with the gateway’s firewall to protect each individual device from outside.

frezik,

IPv6 has DHCP, but it doesn’t work like that. You generally get a prefix and other details about the network, like the gateway address and DNS, and autoconfiguration based on the MAC address does the rest. It was first hoped that DHCP wouldn’t be needed at all for IPv6, but it turned out to still be useful. There are some more complications here, but suffice it to say that you shouldn’t take your knowledge of IPv4 and try to map it onto IPv6. They’re separate beasts.
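As a side note, here's a hypothetical sketch of the MAC-based autoconfiguration, the classic EUI-64 scheme from SLAAC. Modern stacks often substitute privacy extensions (RFC 4941) for this, so treat it as an illustration rather than what your OS necessarily does:

```python
# Derive a SLAAC EUI-64 interface identifier from a MAC address:
# flip the universal/local bit of the first byte and splice 0xFFFE
# into the middle of the 48-bit MAC to get a 64-bit identifier.

def eui64_from_mac(mac: str) -> str:
    b = bytes(int(x, 16) for x in mac.split(":"))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return ":".join(f"{eui[i]:02x}{eui[i + 1]:02x}" for i in range(0, 8, 2))

# A host with this MAC appends the result to the advertised /64 prefix:
print(eui64_from_mac("52:54:00:12:34:56"))  # 5054:00ff:fe12:3456
```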

A gateway can block incoming traffic to the whole internal network if you want. It doesn’t need NAT to do that.

MeanEYE,
@MeanEYE@lemmy.world avatar

I’ll have to look more into it then. However, I still consider hiding your private network to be a good thing, if for no other reason than privacy, even though traffic might be blocked. And I am aware that security through obscurity is not a good form of security; however, when added on top of other properly secure methods, it’s an addition, no matter how trivial. As for NAT, I do wish it would go away, as I’ve had nothing but trouble with it. But it did play an important role with IPv4.

onlinepersona,

I’m curious and quite ignorant in networking, so excuse the questions.

How would the house devices communicate with each other?

In my home LAN behind a router and NAT, each device gets an internal IP thanks to DHCP. If I want to make my home server’s DLNA media server available only internally, there’s nothing I have to do. Just start it up bound to 0.0.0.0 and it’ll be picked up (if I’m not mistaken, by sending a multicast packet to the network). It’s then possible for any smart TV in my home to pick it up, and my phone or computer with VLC don’t need any configuration either.

And if I have a service that should be available to the world, port forwarding does it for me. Should a user want to torrent or use some P2P application, the router can also selectively enable UPnP to open ports for that user’s device. It’s not that complicated.

What is complicated about NAT that makes it worse for security? How would a gateway firewall improve it? Doesn’t it have to keep track of connections too in order to know what’s going on? For example, just because a device (A) establishes a connection with an external one (B), doesn’t mean that another external device (C) is allowed to use that port to communicate with the internal device (A).
What else besides address translation falls away if you remove NAT?

frezik, (edited )

For internal communication on IPv4, everything has some unique internal IP. There are blocks reserved for private space. Usually people use 192.168.x.x or 10.x.x.x. DHCP hands out the addresses.

If you wanted this to work in the IPv6 world, you are assigned a prefix by your ISP, and everything is inside that prefix. Services still have to discover each other by some mechanism. Perhaps by DHCPv6, or perhaps broadcasting their existence.
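A sketch of what that prefix looks like in practice, using Python's ipaddress module. The /56 size and the 2001:db8:: documentation prefix are stand-ins for whatever your ISP actually delegates:

```python
# An ISP delegates a prefix (here a /56); the home network carves it
# into /64 LAN subnets, each of which hosts autoconfigure within.
import ipaddress

delegated = ipaddress.ip_network("2001:db8:abcd:ff00::/56")
subnets = list(delegated.subnets(new_prefix=64))

print(len(subnets))  # 256 possible /64 LANs from a single /56
print(subnets[0])    # 2001:db8:abcd:ff00::/64
```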

Port forwarding is only necessary with NAT. If you have a gateway firewall that blocks incoming new connections by default, then you will need to open the port going to a specific device. Current home networking “routers” combine port forwarding and opening the firewall together as a convenience, but there’s no reason an IPv6 world would need to do that. UPnP can open the port the same way if you want that (though that’s a whole other security issue).

In a home networking “router”, the gateway firewall is already combined in. In fact, I’m putting the “router” in quotes because it’s really a firewall with NAT and some other services like DHCP. It doesn’t typically do things like BGP that we would normally see in a router outside of an edge network like your home. A router out there is an allow-by-default device.

Adding NAT to the gateway firewall makes the code more complicated. For example, here’s a command on Linux that activates NAT for the iptables firewall:


```
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
```

That “MASQUERADE” bit is handled as NAT, and iptables has to implement more code just to do that.

If we wanted to simply drop all new connections being forwarded to the internal network, we would do:

```
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Which tells it to drop forwarded packets by default unless they’re otherwise accepted, and then accept packets that are already part of a connection. Even with NAT, we typically want to do this anyway, so we’re not making things any easier with NAT.

If we want to allow a service listening on port 80 on host 10.0.0.5, we would do:

```
iptables -A FORWARD -p tcp -d 10.0.0.5 --dport 80 -j ACCEPT
```

Which works just fine in a NAT-less world. With NAT, we also have to add this:

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.5
```

Which translates the stuff coming in from outside on port 80 over to 10.0.0.5 on the same port; connection tracking then translates the replies going back the other way automatically. And I might be getting some of the commands wrong, because it’s been a while since I’ve had to configure this.

Suffice it to say, dropping NAT greatly simplifies firewall rules. Your home router is still doing all this (many of them are just Linux iptables these days), but it’s hiding the details from you.

Edit: This doesn’t cover how protocols have been designed to work around NAT, and has resulted in a more centralized Internet that’s easier to spy on. That’s a whole other problem that is hidden from most people.

Kazumara,

Ew it’s PON based

motherr,

Why would you care that it’s passive (PON: passive optical network)? As I understand it, the limitations of passive vs active wouldn’t have any impact on the end user. It’s not something I know a lot about, though.

Kazumara,

Because PONs are just fundamentally worse. Why would anyone turn fiber, of all things, into a shared medium? Just lay fibers from the dwelling up to the central office. It’s barely any costlier, since the real expense is the digging, not the fiber. And it’s basically guaranteed to scale forever by simply replacing the optics on the ends. That kind of infrastructure can also be leased out to other providers at an individual-dwelling granularity. With PONs, competitors are forced into reselling bandwidth at best, or the infrastructure can be monopolised fully.

Squizzy,

As opposed to what? Active? That’s not necessary in local networks

Kazumara,

As opposed to a normal fiber link to the switch in the central office. No oversubscription or shared media.

Squizzy,

I don’t understand how it is a shared medium through a PON system. What is the name for this alternative? I’d like to look into it.

Kazumara,

In a typical PON (GPON, XG-PON, XGS-PON) you have a single fiber from the central office to the optical splitter in the street, from where up to 64 subscribers are connected with one fiber each. The bit between central office and splitter is shared. The splitter is passive and just sends 1/64 of the light to each downstream port; in the other direction it combines all the downstream light towards the upstream port.

The OLT in the central office sends on one wavelength (e.g. 1577 nm) and all subscriber ONTs send on another common wavelength (e.g. 1270 nm). In both directions a time-division technique is applied. I believe in the downstream the individual time frames are encrypted with different keys in turn, such that only the specific destination ONT can read the content of its specific time frames. In the upstream the ONTs have to make sure to send only in their own slots, as otherwise the OLT would receive superimposed optical signals that couldn’t be read. You can probably see how this could go wrong if a neighbor had malfunctioning equipment.
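To put numbers on the shared-medium point: GPON's downstream is roughly 2.488 Gb/s for the whole splitter tree, so a fully loaded 64-way split leaves each subscriber a fairly modest guaranteed floor (a back-of-envelope figure; real service rates depend on how the operator oversubscribes):

```python
# GPON downstream capacity divided across a full 64-way split.
GPON_DOWNSTREAM_MBPS = 2488  # ~2.488 Gb/s shared by the whole tree
SPLIT = 64                   # maximum subscribers per splitter

per_subscriber_mbps = GPON_DOWNSTREAM_MBPS / SPLIT
print(per_subscriber_mbps)  # 38.875 Mb/s guaranteed minimum per subscriber
```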

The alternative doesn’t really have a set of standards like PON, as you can just use whatever optical transceivers you want for each customer individually. Though I guess that for operational reasons an ISP would still standardise the setup for all customers. For example the ISP whose services I subscribe to tells customers to use “Bidir LR, 10 km, TX1310, RX1490-1550 nm”, as 1G, 10G, or 25G, depending on which you order.

To distinguish such a setup from a PON setup I have seen it called point-to-point (P2P).

prorester,

Why are people doubting this? This opens up massive possibilities for people, especially those who want to start businesses outside of city centers.

You could:

  • host your own home-servers and never be worried about bandwidth
  • get 8k streams and not stutter (a low-end 8k stream requires 50Mb/s, a family of 4 would need minimum 200 Mb/s just for videos)
  • send 8k streams and not stutter
  • offload most of your data to a datacenter on the other side of the planet and not worry about access speeds
    • boot into a browser or a minimal frontend with a low powered device and mount your home directory
  • offload computing to the cloud (no need for a gaming PC if you can just play them online)

The biggest thing would be 8k streams. 360 8k streams would be even crazier. 360 videos are filmed using 3-6 cameras depending on how much fish-eye you want. True 360 requires at least 6. If each is filmed at 1080p that's ~6k total resolution, but since you're only watching one section of the video at a time, you're really seeing 1080p.

Those "8k 360 videos" up on youtube are a lie! They aren't 6x8k, but most likely 8k / number of cameras. True 360 8k video would be 6x8k cameras.

A single 8k stream at minimum requires ~50Mb/s. Multiply that by 6 and you're at 300Mb/s just for a single 360 8k stream. Family of 4 --> 1.2Gb/s just for everybody to watch that content - and that's the minimum. If you have a higher bit rate and aren't streaming at 30 fps, you can quite easily double or quadruple that. Family of 4 again means 5Gb/s if everybody's watching that kind of content in parallel.

But this is just the beginning. Why stop at "video"? These kinds of transfer speeds open you up to interactive technologies.

It would still not be enough to stream 8k without any compression whatsoever to reach the lowest latency.

8k = 7680 × 4320 = 33,177,600 pixels. Each pixel has 3 values: red, green, blue. Each takes one of 256 (0-255) values, which is 1 byte, so 3 bytes just for color.
3 × 33,177,600 = 99,532,800 bytes per frame
99,532,800 bytes / 1,024 = 97,200 kilobytes
97,200 kilobytes / 1,024 = ~95 megabytes

So ~95 MB per frame. Let's say you're streaming your screen with no compression at 60Hz, or about 60 fps (minimum). That's 60 × 95 MB ≈ 5,700 MB/s, or roughly 5.6 GB/s. Multiply that by 8 to get bits and you're at about 45 Gb/s, way above 25Gb/s. Uncompressed 4k at 60 fps works out to around 12 Gb/s and 2k to around 5 Gb/s, so those would actually fit on the line. I for one would like to see what an uncompressed 2k stream would look like. In the future, you could have your gaming PC at home hooked up to the internet, go anywhere with a 25Gb/s line, plop down a screen, connect it to the internet and control your computer at a distance with minimal lag as if you're right at home.
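Redoing that arithmetic in decimal units as a quick sanity check (a sketch assuming 24-bit colour, no chroma subsampling, no compression):

```python
# Raw bitrate of uncompressed video: width * height * 3 bytes * fps * 8 bits.

def uncompressed_gbps(width: int, height: int, fps: int, bytes_per_pixel: int = 3) -> float:
    return width * height * bytes_per_pixel * fps * 8 / 1e9

print(round(uncompressed_gbps(7680, 4320, 60), 1))  # 8K at 60 fps: 47.8 Gb/s
print(round(uncompressed_gbps(3840, 2160, 60), 1))  # 4K at 60 fps: 11.9 Gb/s
print(round(uncompressed_gbps(2560, 1440, 60), 1))  # 2K at 60 fps: 5.3 Gb/s
```

Only the 8K figure blows past a 25 Gb/s line; uncompressed 4K and 2K would fit.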

In conclusion, 25Gb wouldn't allow you to do whatever you like. You could do a lot, but there's still room. We're not at the end of the road yet.

maxprime,

20 gig networking — even just a switch — is so expensive. 10 gig is already out of reach for 99% of the population, even network nerds. We’re just now in the past couple years seeing motherboards standardize on 2.5gbps rj45. A lot of brand new nvme ssds can’t saturate 25gbps. There are just so many bottlenecks. I dearly wish those didn’t exist, but I know from my experience upgrading to 10 gig just how many there are.

store.ui.com/us/en/pro/…/usw-pro-aggregation

Personally I am more excited for high speed networking for homelabs to come down in price. At this point in my life I don’t feel the need to access my network outside of my house at super high speeds. My 100mbps up is fine for when I’m out of the house, and 10gbps is more than I need when I’m home.

Pretzilla,

Indeed. I’m getting much less than 1/10th of my provisioned 10Gbps for being cheap like that. It’s still plenty fast, though.

10Gbps is great for feeding a building

At this point I just want affordable 2.5Gb gear

maxprime,

Totally. IMO 2.5gbps should be in every new switch and router without any extra price.

Gigabit came out in 1999. No other standard has moved so slowly.

onlinepersona,

Wouldn’t they provide you with a 20Gb-compatible router? I was curious, and Cat8 LAN cables support 40Gb/s. They are 3x as expensive as Cat7, but since I’m just a few meters away from the router, that’s about 10-15€ and the cables are done.

Ah… the PCI-e ethernet card is where it gets pricey 😮 250€ for 10Gb card.

Damn…

Although, I’d be future proof for sure. That kind of speed will probably be enough for 20 years or so.

maxprime,

FWIW 10 gig cards can be much cheaper than 250€ as long as you’re willing to use SFP+ (I got a used pair of cards with a 10m optical cable for $90 CAD) but 25gig is where it gets stupid.

Even if they do supply a capable router, you will probably want at least a switch since most ISP supplied routers only have a few ports. Plus, it’s not uncommon for an ISP router to deliver their advertised speed over only one port, even if the router has several. At the end of the day, though, if you’re paying for >gigabit you probably want to set up your own firewall with a fancy router so you can properly configure your network.

Crazy that gigabit Ethernet is 25 years old and still the de facto standard. IMO we should all be able to afford 100gig inside our homes, finding the bottleneck inside our machines, not between them. Alas, 10gig is for the enthusiasts, and anything above that is for the elites.

twotone,

offload computing to the cloud (no need for a gaming PC if you can just play them online)

Unless you can live very close to one of the data centers doing the computing to minimize the number of hops, that just isn’t even remotely doable with modern networking equipment

Google tried it with stadia and gifs like this show why it doesn’t work for most people

prorester,

There are people on the internet with about 2-3 ms of ping. I'm not a network engineer to tell you how that's even possible, but I've seen it. I'm on 15ms to most game servers right now on a copper line.

Google Stadia failed for different reasons. Nvidia Go (or whatever it's called) still exists. Just because I have a shitty copper line doesn't mean fibre will be as shitty.

kogasa,
@kogasa@programming.dev avatar

Yeah, man. Thank God someone is finally thinking about the family of 4 simultaneously watching 8K 120Hz 360 degree streams.

Also,

  • bandwidth isn’t the same as latency. This would not let you remote control “with minimal latency,” it would be exactly the same as it is with say 20Mbps download.
  • lossless and visually lossless compression dramatically reduces the amount of bandwidth required to stream video. Nobody will ever stream uncompressed video, it makes no sense.
  • If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.

prorester,

Again, just because it isn't being done yet, doesn't mean it won't be. Every time technology progresses, we find new and interesting ways to fill the new space created by it.

Nobody will ever stream uncompressed video, it makes no sense

Nobody thought it would ever make sense to stream games over the internet with Nvidia Go (or whatever it's called), but it's being done. Nobody thought it would make sense to turn a browser into a nearly full operating system, but that's about done.

If you want to know what an uncompressed 2K stream looks like, look at a 2K monitor.

Genius, why didn't I think of that. Thanks for pointing that out.

bandwidth isn’t the same as latency

Wow, I had no idea! I bet a 20Gb line won't get under 1s of ping. There's absolutely no way.

wahming,

I’m just doubting Google will actually get it done. They’ve already abandoned fibre expansion once, no reason to think they’ll stick to it this time around.

MeanEYE,
@MeanEYE@lemmy.world avatar

Am thinking that in the somewhat near future network boot will become a lot more dominant than it used to be. Infrastructure speeds are becoming sufficient to allow a somewhat longer boot in exchange for significantly simpler administration and troubleshooting.

o0joshua0o,

I have their 1gbps plan, but I don’t see how I could utilize anything faster.

snooggums,
@snooggums@kbin.social avatar

Things that take seconds now take even fewer seconds!

prorester,

if it's taking seconds, then there's already a problem

snooggums,
@snooggums@kbin.social avatar

120 GB game update files.

ayaya,
@ayaya@lemdro.id avatar

You can hit your data cap in half the time!

snooggums,
@snooggums@kbin.social avatar

I don't think Google fiber has a data cap, but that is second hand from a friend that has it.

ayaya,
@ayaya@lemdro.id avatar

Oops, I actually know that but I got a little lost in the comment chain. I had just read the comment above yours talking about the 2gbps plan, hence the “half the time.” My ISP has also started offering 2gbps but still has a 1TB cap which means it’s possible to hit the cap in just over an hour which is pretty funny.
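That "just over an hour" figure checks out, as a quick sketch in decimal units:

```python
# Time to burn through a 1 TB data cap at a sustained 2 Gbps.
cap_bits = 1e12 * 8   # 1 TB expressed in bits
line_bps = 2e9        # 2 Gbps line rate
minutes = cap_bits / line_bps / 60

print(round(minutes, 1))  # 66.7 minutes
```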

snooggums,
@snooggums@kbin.social avatar

So one thing that a lot of people overlook is that even with a data cap, higher speeds are still more convenient if you consume the same amount of stuff. It isn't as noticeable now as it was when speeds went up in the kilobyte ranges though, so many people won't even see the difference, especially if they don't hit the cap.

That said, caps are bullshit since network congestion is caused by people using it at the same time, not by the total amount per month.

ManosTheHandsOfFate,
@ManosTheHandsOfFate@lemmy.world avatar

My provider recently started offering a 2gbps plan for $30 more a month. I was tempted until I thought about the money I’d need to spend on new equipment to take advantage of it. 1gbps fiber is plenty for now.

IMongoose,

Ya, mine is slow rolling 2gig but it kind of fucked me up because now I want 6E mesh APs and it’s going to cost me like $500. I know I don’t need it, but the fact that I could have it is tempting. Plus I need 6E for the VR headset I also don’t have.

billygoat,

Tbf, a lot of these multi-gig plans are geared to families, where more than one person could be doing high-bandwidth activities. Or even just so one person doing high-bandwidth things doesn’t cause the other person’s Zoom call to stutter.

That being said, ain’t no one NEED 20gbits but by god I would enjoy it.

diomnep,

Thing is though, most consumer networking gear is capable of a maximum of 1gbit, so to even take advantage of 2gig or 2.5gig you at least need a router with a 2.5gig uplink. If you have this you can have a couple of people on the network using a gig each.

My setup is a 1.2g cable connection going into a 2.5g port on my router, with a couple of servers connected to the router over 10g. This basically lets me download off of my servers at the full speed of the network but the rest of my devices are limited to 1gig.

Going up to 20gig would require a large investment to see the benefits. First you would need a router with a 25g uplink port, which is really only going to be found on a specific tier of “enterprise” gear. These routers aren’t going to have a bunch of ports so you are going to need to dump the output either to a 25g switch or a couple of 10g switches (probably the most cost-effective option). From there you can distribute out to 20 machines at 1g.

Anyway, you are definitely right about the aim of a service like this but to see the benefits of a 20g connection would require some very expensive and specialized equipment.

tony,

I’ve yet to see a remote website that’ll send me 1gbps continuously except a speed test… and whilst it’s nice to see big numbers on those, it isn’t really justifying the cost.

Even things like Microsoft and Steam throttle far lower than that (presumably because they don’t want a million people trying to hit them for 1gbps constantly).

Once my minimum term is up on this link I can get a 1.6Gbps one, but probably won’t bother.

Tandybaum,

I’m all about thinking ahead, but this seems insane. Really struggling to think of a home use that needs these speeds.

I run a relatively small server for family and friends and I haven’t moved to 2gig plan because even that seems like overkill.

MeanEYE,
@MeanEYE@lemmy.world avatar

No one needs these speeds unless you have a home office, and even then it’s a stretch. For residential buildings it might make sense, but the USA doesn’t have those, or at least not as many. However, it’s far easier to iron out the kinks and issues with early adoption, and aggregation is a breeze then.

Tandybaum,

I’m all for insane early adopters ironing out kinks in stuff like this. I’m sure we’ll need these speeds at some point, but I can’t imagine the average person will in the next 15-20 years.

I’d say this is more bandwidth than my entire road would need in total.

Jah348,

This is still a thing? I thought they crushed it like 10 years ago

SinningStromgald,

No, they severely underestimated how hard it would be to overcome the telcos and their lobbying.

CynicRaven,

They may mean crushed it like Google killed Fiber. :D

Jaysyn, (edited )
@Jaysyn@kbin.social avatar

I was involved in one of these Google fiber roll outs several years ago, Google simply doesn't know what the fuck they want or what they are doing as far as installing outside plant goes.

EDIT: To clarify, they simultaneously had no fucking clue what they were doing & also wanted to micromanage all of their contractors.

joekar1990,

Google really doesn’t know what it wants in general besides more profit. The Killed by Google list is impressively long.

Vilian,

how long until google kills it?

Ghostalmedia,
@Ghostalmedia@lemmy.world avatar

Fiber infrastructure? More likely they’d sell it if they wanted out.

SnipingNinja,

That’s what they’re counting as killing based on the killed by Google website

jollyrogue,

I thought they already did, so this is unexpected.

RojoSanIchiban,

Maybe they should be expanding their physical network first. I waited seven years after they supposedly came to my hometown, and their coverage area barely moved. Most of that is absolutely the fault of AT&T and Comcast stonewalling pole installations, but they have the money to put up their own damn poles made of gold after that $77 billion profit report.

Now I moved elsewhere after covid and of course the only two real options still suck uncontrollably with no hope of any other big mover creating actual competition.

ArtificialLink,

Google Fiber has supposedly been coming to the west side of Atlanta for 10-plus years. It hasn’t expanded at all, yet they still keep that “coming soon to your neighborhood” message up. And somehow where I am there’s only one option available. Fucking shitty Comcast

tburkhol,

There’s vaults labeled “GFBR” 200 yards from my house on the east side, and it’s still “coming soon.” Meanwhile, AT&T is out here digging every 2 years.

ArtificialLink,

AT&T offered me 5Mbps lmao. Idk what they are digging for

tburkhol,

IKR? The last time digsafe came out and marked, there were 3 separate AT&T lines twisting around each other like spaghetti, all going the same way and within 3 feet of each other. Like, you’ve already got conduit buried, just blow another fiber through it. Maybe some exec’s kid runs a horizontal drilling company.

seaQueue,
@seaQueue@lemmy.world avatar

Probably putting in VDSL to cash in on federal “high speed Internet” grants.

Schemata,

It’s so frustrating. I worked with a group that had their own community broadband council just to get broadband more widespread in their county.

Those grants are ridiculous: one objection from another federal department about their grants creating a conflict, or another co-op claiming they already offer service there, can derail a whole application. And applications aren’t easy or cheap to produce either.

Makes me sick to my stomach

originalucifer,
@originalucifer@moist.catsweat.com avatar

i am also incredibly disappointed in their lack of achievement here. they have a metric shit-tonne of liquid cash, lawyers and tech out the butthole.. but no.. we're back to ma' bell still coagulating ala T2.

so much for being different

Dark_Arc,
@Dark_Arc@social.packetloss.gg avatar

I suspect lawyers are stonewalling expansion for fear of making their monopoly cases worse

ultratiem,
@ultratiem@lemmy.ca avatar

Dude I feel bad you’re relying on Google of all people to save you 😬

originalucifer,
@originalucifer@moist.catsweat.com avatar

you should feel bad for everyone in the u.s. who has to suffer the government(s) that allow this bullshit to even be a problem.

LukeMedia,

You could really change US to North America here

fne8w2ah,

Something something ISPs forcing municipalities to create service monopolies?

RojoSanIchiban,

Yep, somethingsomethingsomething regulatory capture.
