Direct Sockets API in Chrome 131
198 points by michaelkrem 6 days ago | 158 comments
  • chocolatkey 6 days ago |
    When reading https://github.com/WICG/direct-sockets/blob/main/docs%2Fexpl..., it's noted this is part of the "isolated web apps" proposal: https://github.com/WICG/isolated-web-apps/blob/main/README.m... , which is important context, because the obvious reaction to this is that it's a security nightmare
    • phildenhoff 5 days ago |
      Interesting — the Firefox team’s response was very negative, but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app (as opposed to being an API available to any website).

      In reading their comments, I also felt the API was a bad idea. Especially when technologies like Electron or Tauri exist, which can make those TCP or UDP connections. But IWA serves to displace Electron, I guess

      • nzoschke 5 days ago |
        I'm hacking on a Tauri web app that needs a bridge to talk UDP protocols literally as we speak.

        While Tauri seems better than ever for cross platform native apps, it's still a huge step to take to give my web app lower-level access: Rust toolchain, Tauri plugins, sidecar processes, code gen, JSON RPC, all to let my web app talk to my network.

        Seems great that Chrome continues to bundle these pieces into the browser engine itself.

        Direct sockets plus WASM could eat a lot of software...

        • 1oooqooq 5 days ago |
          with so many multiplatform gui toolkits today, tauri and electron are really bad choices
          • montymintypie 5 days ago |
            What's your recommendation? I've tried so many multiplatform toolkits (including GTK, Qt, wxWidgets, Iced, egui, imgui, and investigated slint and sciter) and nothing has come close to the speed of dev and small final app size of something like Tauri+Svelte.
            • nzoschke 5 days ago |
              I've also tried Flutter, React Native, Kotlin multiplatform, Wails.

              I'm landing on Svelte and Tauri too.

              The other alternative I dabble with is using Android Studio or Xcode to write my own WebView wrappers.

              • bpfrh 5 days ago |
                What did you dislike about Kotlin Multiplatform?
            • 1oooqooq 5 days ago |
              of course dev speed will be better with tauri plus the literal ton of JavaScript transpilers we use today.

              but for us an in-house pile of egui helpers allows for fast applications that are closer to native speed. and flutter for mobile (using neither Cupertino nor Material)

              • montymintypie 5 days ago |
                Glad to hear that egui is working for you, but in my experience it's not accessible, it's difficult to render accurate text (including emoji and colours), it's very frustrating to extend built-in widgets, and it's quite verbose. One of my most recent experiences was making a fairly complex app at work in egui, then migrating to tauri because it was such a slog.
                • api 5 days ago |
                  The web stack is now the desktop UI stack. I think the horse has left the barn.

                  It’s not great but there’s just no momentum or resources anywhere to work on native anymore outside platform specific libraries. Few people want to build an app that can only ever run on Mac or Windows.

          • cageface 5 days ago |
            The cross platform desktop gui toolkits all have some very big downsides and tend to result in bad looking UIs too.
            • rubymamis 5 days ago |
              I've built my app[1] using Qt (C++ and QML), and I think the UI looks decent. There's still a long way to go for it to feel truly native, but I've got some cool ideas.

              [1] https://get-notes.com/

              • rty32 5 days ago |
                You are probably not solving the same problems many other people are facing.

                Many such applications are accessible on the web, often with the exact same UI. They may even have a mobile/iPad version. They may be big enough that they have a design system that needs to be applied in every UI (including the company website). Building C++ code on all platforms and running all the tests may be too expensive. The list goes on.

                • rubymamis 5 days ago |
                  I just started prototyping a mobile version of my app (which shares code with my desktop app) and the result looks promising (still work-in-progress tho).

                  Offering a web app is indeed not trivial. Maybe Qt WebAssembly will be a viable option if I can optimize the binary and users wouldn't mind a long first load (after which the app should be cached for instant loading). Or maybe I could build a read-only web app using web technology.

                  Currently, my focus is building a good native application, and I think most of my users care about that. But in the future, I can see how a web app could be useful for more users. One thing I would like to build is a web browser that could load both QML and HTML files (using a regular web engine), so I could simply deploy my app by serving my QML files over the internet without the binary.

              • cageface 5 days ago |
                That's definitely one of the best looking Qt apps I've seen.
                • rubymamis 5 days ago |
                  Thank you! I think Qt is absolutely great. One needs to put in a little effort to make it look and behave nicely. I wrote a blog post about it[1], if you're interested.

                  [1] https://rubymamistvalove.com/block-editor

      • chrismorgan 5 days ago |
        > but didn’t (in my reading) address use of the API as being part of an otherwise essentially trusted app

        That’s what the Narrower Applicability section is about <https://github.com/mozilla/standards-positions/issues/431#is...>. It exposes new vulnerabilities because of IP address reuse across networks, and DNS rebinding.

        • mmis1000 5 days ago |
          - It is possible, if not likely, that an attacker will control name resolution for a chosen name. This allows them to provide an IP address (or a redirect that uses CNAME or similar) that could enable request forgery.

          This is quite trivial, not merely possible. DNS is quite a simple protocol: writing a DNS server that reflects every request for aaa-bbb-ccc-ddd.domain.test to the IP aaa.bbb.ccc.ddd won't take you even a day. And in fact such services already exist in the wild.
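
          A minimal sketch of the hostname-to-IP mapping such a reflecting resolver performs (the domain here is made up; a real server would do this inside its DNS A-record handler):

            // "192-168-1-10.domain.test" -> "192.168.1.10"
            function hostnameToIp(hostname: string): string | null {
              const parts = hostname.split(".")[0].split("-");
              if (parts.length !== 4) return null;
              const octets = parts.map(Number);
              if (octets.some((o) => !Number.isInteger(o) || o < 0 || o > 255)) return null;
              return octets.join(".");
            }

            console.log(hostnameToIp("192-168-1-10.domain.test")); // "192.168.1.10"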

    • crote 5 days ago |
      That doesn't really make it any better, if you ask me.

      The entire Isolated Web Apps proposal is a massive breakdown of the well-established boundaries provided by browsers. Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download. The latter is heavily enforced by both Chrome and Windows complaining quite a bit if you're trying to run downloaded executables - especially unsigned ones. If you follow those two basic things, websites cannot hurt your machine.

      IWA seems to be turning this upside-down. Chrome is essentially completely bypassing all protections the OS has added, and allowing Magically Flagged Websites to do all sorts of dangerous stuff on your computer. No matter what kind of UX they provide, it is going to be nigh-on impossible to explain to people that websites are now suddenly able to do serious harm to your local network.

      Browsers should not be involved in this. They are intended to run untrusted code. No browser should be allowed to randomly start executing third-party code as if it is trustworthy, that's not what browsers are for. It's like the FDA suddenly allowing rat poison into food products - provided you inform consumers by adding it to the ingredients list of course.

      • apitman 5 days ago |
        > Every user understands two things about the internet: 1) check the URL before entering any sensitive data, and 2) don't run random stuff you download

        I think you're severely overestimating the things every user knows.

      • girvo 5 days ago |
      Unfortunately this is the future. Handing the world wide web's future to Google was a mistake, and the only remedy is likely to come from an (unlikely) antitrust breakup or divestment.
        • bloomingkales 5 days ago |
          I doubt websites as we know it will be what we’ll be dealing with going forward anyways.

          What is a browser if we just digest all the HTML and spit out clean text in the long run?

          We handed over something of some value I guess, once upon a time.

        • rad_gruchalski 5 days ago |
          > Handing the world wide webs future to Google

          Nobody handed anything to anyone. They go with the flow. The flow is driven by people who use their products. The browser is how Google delivers their products, so it’s kinda difficult to blame them for trying to push the envelope, but there are alternatives to Chrome.

          • troupo 5 days ago |
            > They go with the flow.

            The ancient history of just 10-15 years ago shows Google aggressively marketing Chrome across all of its not inconsiderable properties like search and Youtube, and sabotaging other browsers while they were at it: https://archive.is/2019.04.15-165942/https://twitter.com/joh...

            • rad_gruchalski 5 days ago |
              Indeed. There was a time when I myself used it as my primary browser and recommended it to everyone around. That changed when they started insisting on signing into the account to „make the most out of it”, so I went back to Firefox. Since then I stopped caring. I know, virtue signalling. My point is: nobody handed anything over to Google. At the time the alternatives sucked, so they won the market. But today we have great alternatives.
          • pjmlp 2 days ago |
            And some developers shipping Chrome alongside their apps, instead of learning proper Web development.
      • derefr 5 days ago |
        Does it help to think of it less as Chrome allowing websites to do XYZ, and more as a PWA API for offering to install full-fat browser-wrapper OS apps (like the Electron kind) — where these apps just so happen to “borrow” the runtime of the browser they were installed with, rather than shipping with (and thus having to update) their own?
      • rad_gruchalski 5 days ago |
        The last time I used Chrome was about 3 years ago. You have a choice.
        • eitland 5 days ago |
          Something always breaks my streak, but since last year or so I feel I am down to twice a year or something.
        • pseudosavant 5 days ago |
          Only kind of. If you are on Mac you can use Safari. On Windows your options are Firefox or other versions of Chrome (Edge, Opera, Brave, etc), and Firefox will not work right often enough that it'll drive you to a version of Chrome.
      • mschuster91 5 days ago |
        > If you follow those two basic things, websites cannot hurt your machine.

        Oh yes they can. Quite a bunch of "helper" apps - printer drivers are a bit notorious IME - open up local HTTP servers, and not all of them enforce CORS properly. Add some RCE or privilege escalation vulnerability in that helper app and you got yourself an 0wn-from-the-browser exploit chain.

        • BenjiWiebe 5 days ago |
          How often does that actually happen?
    • rty32 5 days ago |
      Have isolated web apps/web bundles gained any traction over the past few years? I just realized that this thing existed and there were some discussions around it -- I had almost completely forgotten about it.

      I did a search, and most stuff comes from a few years ago.

      • meiraleal 5 days ago |
        It is used by ChromeOS
        • rty32 5 days ago |
          You mean apps written by Google as "native apps"?

          Any use cases outside that?

          If not, it is probably fair to say nobody uses this.

          • meiraleal 4 days ago |
            A PWA is an IWA, so lots of people are using it besides Google
      • angra_mainyu 4 days ago |
        It makes much more sense to bundle a binary + web extension (w/ native messaging) to handle bridging the browser isolation in a sensible manner.

        It's a minimal amount of extra work and would mean you cross the browser isolation boundary in a very controlled manner.
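
        For illustration, a rough sketch of the extension side of that bridge (the host name "com.example.socket_bridge" and the message format are made up; the native helper and its host manifest are installed separately, and the extension needs the "nativeMessaging" permission):

          // Background script: hand the raw-socket work to a locally
          // installed native helper over Chrome's native messaging.
          const port = chrome.runtime.connectNative("com.example.socket_bridge");

          port.onMessage.addListener((msg) => {
            console.log("reply from native helper:", msg);
          });

          // Ask the helper (not the browser) to open the socket for us.
          port.postMessage({ op: "tcp_connect", host: "192.168.1.50", port: 502 });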

  • modeless 5 days ago |
    I think a lot of people don't realize it's possible to use UDP in browsers today with WebRTC DataChannel. I have a demo of multiplayer Quake III using peer-to-peer UDP here: https://thelongestyard.link/

    Direct sockets will have their uses for compatibility with existing applications, but it's possible to do almost any kind of networking you want on the web if you control both sides of the connection.
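
    For anyone curious, the UDP-like behaviour is just two options on the data channel. A rough sketch (not the demo's actual code; signalling, i.e. exchanging the offer/answer and ICE candidates, still happens over some channel you provide yourself):

      const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

      // ordered: false + maxRetransmits: 0 => unreliable, unordered delivery,
      // the closest the web currently gets to raw UDP datagrams.
      const channel = pc.createDataChannel("game", { ordered: false, maxRetransmits: 0 });
      channel.onopen = () => channel.send("hello");
      channel.onmessage = (e) => console.log("got", e.data);

      // The offer/answer and ICE candidates still travel out of band
      // (e.g. over a websocket to a small signalling server).
      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);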

    • mhitza 5 days ago |
      Longest Yard is my favorite Q3 map, but for some reason I cannot use my mouse (?) in your version of the Quake 3 demo.
      • modeless 5 days ago |
        Interesting, what browser and OS?
        • mhitza 5 days ago |
          Brave browser (Chromium via Flatpak) on the Steam Deck (Arch Linux) in Desktop mode with bluetooth connected mouse/keyboard.
          • topspin 5 days ago |
            Same browser on win10. Mouse works after you click in the window and it goes full screen. However, it hangs after a few seconds of game play.

            Stopped hanging... then input locks up somehow.

            Switched to chrome on win10, same issue: input locks up after a bit.

            • modeless 5 days ago |
              Yeah that issue I have seen, but unfortunately haven't been able to debug yet as it isn't very reproducible and usually stops happening under a debugger.
              • topspin 5 days ago |
                Even with the problems, just the few seconds of playing before the crash+input hang got me hooked. So, off to GOG to get q3a for $15. Also, quake3e with all the quality, widescreen, aspect ratio and FPS tweaks... chatgpt 4o seems to know everything there is to know about quake3e, for some reason.

                Talk about getting nerd sniped.

          • modeless 5 days ago |
            Hmm, I bet the problem is my code expects touch events instead of mouse events when a touchscreen is present. Unfortunately I don't have a computer with both touchscreen and mouse here to test with so I didn't test that case. I did implement both gamepad and touch controls, so you could try them to see if they work.
        • mhitza 5 days ago |
          Works in Firefox, on the same system.
        • nmfisher 5 days ago |
          I can't use mouse either, macos/Chrome. Otherwise, cool!
    • winrid 5 days ago |
      Runs smoother than the Android home screen. :)
    • nightowl_games 5 days ago |
      Yeah we use WebRTC for our games built on a fork of Godot 3.

      https://gooberdash.winterpixel.io/

      tbh WebRTC gives basically the same network performance as websockets and was way more complicated to implement. Maybe the webrtc perf is better in other parts of the world or something...

      • modeless 5 days ago |
        Yeah WebRTC is a bear to implement for sure. Very poorly designed API. It can definitely provide significant performance improvements over web sockets, but only when configured correctly (unordered/unreliable mode) and not in every case (peer-to-peer is an afterthought in the modern internet).
        • nightowl_games 5 days ago |
          We got it in unreliable/unordered and it still barely moves the needle on network perf over websockets from what we see in north america connecting to another server in north america
          • modeless 5 days ago |
            I wouldn't expect a big improvement in average performance but the long tail of high latency cases should be improved by avoiding head-of-line blocking. Also peer-to-peer should be an improvement over client-server-client in some situations. Not for battle royale though I guess.

            Edit: Very cool game! I love instant loading web games and yours seems very polished and fun to play. Has the web version been profitable, or is most of your revenue from the app stores? I wish I better understood the reasons web games (reportedly) struggle to monetize.

            • nightowl_games 4 days ago |
              Thanks! The web versions of both of our mobile/web games do about the same as the IAP versions. We don't have ads in the mobile versions, so the ad revenue is reasonable. We're actually leaning more into smaller web games as a result of that. As for profit on this game specifically, I think it deserves better. I think Goober Dash is a great game, but it's not crushing it like I'd hoped.
        • windows2020 5 days ago |
          I would say WebRTC is both a must and only worth it if you need UDP, such as in the case of real-time video.
      • saurik 5 days ago |
        I mean, the only cases where UDP vs. TCP are going to matter are 1) if you experience packet loss (and maybe you aren't for whatever reason) and 2) if you are willing to actively try to shove other protocols around and not have a congestion controller (and WebRTC definitely has a congestion controller, with the default in most implementations being an algorithm about as good as a low-quality TCP stack).
        • modeless 5 days ago |
          Out-of-order delivery is another case where UDP provides a benefit.
    • dboreham 5 days ago |
      WebRTC depends on some message transport (using http) existing first between peers before the data channel can be established. That's far from equivalent capability to direct sockets.
      • lifthrasiir 5 days ago |
        Not only that, but DTLS is mandated for any UDP connections.
        • modeless 5 days ago |
          Is that a problem? Again, I'm talking about the scenario where you control both sides of the connection, not where you're trying to use UDP to communicate with a third party service.
          • lifthrasiir 5 days ago |
            I think all three comments including mine are essentially saying the same but in different viewpoints.
      • modeless 5 days ago |
        Yes, you do need a connection establishment server, but in most cases traffic can flow directly between peers after connection establishment. The reality of the modern internet is even with native sockets many if not most peers will not be able to establish a direct peer-to-peer connection without the involvement of a connection establishment server anyway due to firewalls, NAT, etc. So it's not as big of a downgrade as you might think.
        • huggingmouth 5 days ago |
          That changed (ahm.. will change) with ipv6. I was surprised to see that I can reach residential ipv6 lan hosts directly from the server. No firewalls, no nat. This remains true even with abusive isps that only give out /64 blocks.

          That said, I agree that peer to peer will never be seamless, thanks mostly to said abusive isps.

          • theamk 5 days ago |
            I sure hope not, this will bring in a new era for internet worms.

            If some ISPs are not currently firewalling all incoming IPv6 connections, it's a major security risk. I hope some security researcher raises noise about that soon, and the firewalls will default to closed.

            • 1oooqooq 5 days ago |
              it kinda already begun
              • modeless 5 days ago |
                Has there been a big ipv6 worm? I thought that the defense against worms was that scanning the address space was impractical due to the large size.
                • 1oooqooq 5 days ago |
                  i don't think they scan the entire space. but even before that there were ones abusing bonjour/upnp which is what chrome will bring back with this feature.
            • immibis 5 days ago |
              My home router seems to have a stateful firewall and so does my cellphone in tethering mode - I don't know whether that one's implemented on the phone (under my control) or the network.

              Firewalling goes back in the control of the user in most cases - the other day we on IRC told someone how to unblock port 80 on their home router.

          • apitman 5 days ago |
            IPv6 isn't going to happen. Most people's needs are met by NAT for clients and SNI routing for servers. We ran out of IPv4 addresses years ago. If it was actually a problem it would have happened then. It makes me sad for the p2p internet but it's true.
            • justahuman74 5 days ago |
              > If it was actually a problem

              It became a problem precisely the moment AWS starting charging for ipv4 addresses.

              "IPv4 will cost our company X dollars in 2026, supporting IPv6 by 2026 will cost Y dollars, a Z% saving"

              There's now a tangible motivator for various corporate systems to at least support ipv6 everywhere - which was the real ipv6 impediment.

              Residential ISPs appear to be very capable of moving to v6: there are lots of examples of that happening in their backends, and they've demonstrated already that they're plenty capable of giving end users boxes that just so happen to do ipv6.

              • apitman 5 days ago |
                Yes and setting up a single IPv4 VPS as load balancer with SNI routing in front of IPv6-only instances solves that.

                Most people are probably using ELB anyway

            • immibis 5 days ago |
              What do you mean not going to happen? It's already happening. It's about 45% of internet packets.
              • paulddraper 5 days ago |
                Not happening for 55%.

                Try to connect to github.com over IPv6.

                • remram 5 days ago |
                  It doesn't work now so it's never going to work?
                  • apitman 5 days ago |
                    GitHub might work someday. Wide enough adoption that you can host a service without an IPv4 address will never happen.
                    • sroussey 5 days ago |
                      Honestly, it could be a feature rather than a bug…
                  • paulddraper 5 days ago |
                    If it doesn't work for a website as large and technically forward as GitHub in 2024, the odds are not looking good.
                • immibis 5 days ago |
                  Yes, that's one of the rare exceptions of a company trying to obsolete itself. It's actually one reason a bunch of people are moving away from Github.
              • apitman 5 days ago |
                The sun is about 45% of the way through its life.
            • ElijahLynn 5 days ago |
              "We are introducing a new charge for public IPv4 addresses. Effective February 1, 2024 there will be a charge of $0.005 per IP per hour for all public IPv4 addresses"

              https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address...

              • apitman 5 days ago |
                Yes and setting up a single IPv4 VPS as load balancer with SNI routing in front of IPv6-only instances solves that.

                Most people are probably using ELB anyway.

          • kelnos 5 days ago |
            > I was surprised to see that I can reach residential ipv6 lan hosts directly from the server. No firewalls, no nat

            No NAT, sure, that's great. But no firewalls? That's not great. Lots of misconfigured networks waiting for the right malware to come by...

    • ignoramous 5 days ago |
      > Direct sockets will have their uses for compatibility with existing applications...

      In fact runtimes like Node, Deno, Cloudflare Workers, Fastly Compute, Bun et al run JS on servers, and will benefit from standardization of such features.

        [WinterCG] aims to provide a space for JavaScript runtimes to collaborate on API interoperability. We focus on documenting and improving interoperability of web platform APIs across runtimes (especially non-browser ones).
      
      https://wintercg.org/
      • noduerme 5 days ago |
        Can you explain further... how does this improve upon websockets and socketIO for node?
        • arlort 5 days ago |
          Without a middleman you can only use WebSockets to connect to an HTTP server.

          So, for instance, if I want to connect to an MQTT server from a webpage, I have to use a server that exposes a WebSocket endpoint. With direct sockets I could connect to any server using any protocol.
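
          Roughly what that would look like, going by the WICG explainer (a sketch only: it's gated to Isolated Web Apps with the right permissions policy, and the exact API shape may still change):

            // Explainer style: a TCPSocket whose `opened` promise resolves
            // to a pair of WHATWG streams.
            const socket = new TCPSocket("broker.local", 1883); // e.g. a plain MQTT broker
            const { readable, writable } = await socket.opened;

            const writer = writable.getWriter();
            await writer.write(new Uint8Array([0x10 /* MQTT CONNECT, rest omitted */]));

            const reader = readable.getReader();
            const { value } = await reader.read();
            console.log("bytes from broker:", value);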

      • synctext 5 days ago |
        This slowly alters the essence of The Internet, due to the permissionless nature of running any self-organising system like Bittorrent and Bitcoin. This is NOT in Android, just isolated Web Apps at desktops at this stage[0]. The "direct socket access" creep moves forward again. First, IoT without any security standards. Now Web Apps.

        With direct socket access to TCP/UDP you can build anything! You lose the constraint of JS servers, costly WebRTC server hosting, and the lack of a listen-socket feature in WebRTC DataChannel.

        <self promotion>NAT puncturing is already solved in our lab, even for mobile 4G/5G. This might bring back the cyberpunk dreams of Peer2Peer... In our lab we bought 40+ SIM cards for the big EU 4G/5G networks and got the carrier-grade NAT puncturing working[1]. Demo blends 4G/5G puncturing, TikTok-style streaming, and Bittorrent content backend. Reading the docs, these "isolated" Web Apps can even do SMTP STARTTLS, IMAP STARTTLS and POP STLS. wow!

        [0] https://github.com/WICG/direct-sockets/blob/main/docs/explai... [1] https://repository.tudelft.nl/record/uuid:cf27f6d4-ca0b-4e20...

        • Uptrenda 5 days ago |
          Hello, I wanted to say I've been working on a peer-to-peer library and I'm very much interested in your work on symmetric NAT punching (which as far as I know is novel.) Your work is exactly what I was looking for. Good job on the research. It will have far-reaching applications. I'd be interested in implementing your algorithms, depending on the difficulty, some time. Are they patented or is this something anyone can use?

          Here's a link to an overview of my system: https://p2pd.readthedocs.io/en/latest/p2p/connect.html

          My system can't handle symmetric-to-symmetric, but could in theory handle other NAT types to symmetric, depending on the exact NAT types and delta types.

          • ignoramous 5 days ago |
            I read OP's thesis (which focuses on CGNAT), and one of the techniques discussed therein is similar to Tailscale's: https://tailscale.com/blog/how-nat-traversal-works

              ...with the help of the birthday paradox. Rather than open 1 port on the hard side and have the easy side try 65,535 possibilities, let’s open, say, 256 ports on the hard side (by having 256 sockets sending to the easy side's ip:port), and have the easy side probe target ports at random.
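
            Back-of-the-envelope version of why that works (my own approximation, sampling with replacement for simplicity):

              // Hard side keeps 256 source ports open; easy side probes random ports.
              const open = 256, space = 65535;
              const pHitPerProbe = open / space;                 // ~0.39% per probe

              // Chance that at least one of k probes lands on an open port.
              const pHitAfter = (k: number) => 1 - (1 - pHitPerProbe) ** k;

              console.log(pHitAfter(174).toFixed(2));  // ~0.49, i.e. about 50/50
              console.log(pHitAfter(1024).toFixed(2)); // ~0.98
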
            • Uptrenda 4 days ago |
              this comment section has been the most useful and interesting thing I've seen for my own work in a very long time. And completely random, too. Really not bad. To me this represents the godly nature of this website. Where you have extremely well informed people posting high quality technical comments that would be hard to find anywhere else on the web. +100 to all contributors.
            • synctext 4 days ago |
              indeed, Tailscale was the first to realise this.

              We added specific 4G and 5G mobile features. These carrier-grade boxes often have non-random port allocations. "By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs."

        • 3np 5 days ago |
          > By leveraging provider-aware (Vodafone,Orange,Telia, etc.) NAT puncturing strategies we create direct UDP-based phone-to-phone connectivity.

          > We utilise parallelism by opening at least 500 Internet datagram sockets on two devices. By relying on provider-aware IPv4 range allocations, provider-aware port prediction heuristics, high bandwidth probing, and the birthday paradox we can successfully bypass even symmetric NATs.

          U mad. Love it!

        • savolai 4 days ago |
          I don’t understand the topic deeply. Is this futureproof, or likely to be shut down in a cat-and-mouse game if it gets widespread, like it needs to for a social network?
        • eternityforest 3 days ago |
          What if someone finds your IP address and sends you a bunch of crap? It would be very easy to use someone's entire monthly data allowance.

          Plus, it only works if you can afford and have access to cell service, and in those cases you already have access to normal Internet stuff.

          Unless cell towers are able to route between two phones when their fiber backend goes down. That would make this actually pretty useful in emergencies, if a tower could work like a ham repeater, assuming it wasn't too clogged with traffic to have a chance.

    • bpfrh 5 days ago |
      You can also use WebTransport, with streams for TCP-like and datagrams for UDP-like messaging: https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...
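
      A minimal sketch (assuming an HTTP/3 server you control at the example URL; datagrams are UDP-like, streams are reliable):

        // WebTransport over HTTP/3: reliable streams plus unreliable datagrams.
        const wt = new WebTransport("https://example.com:4433/echo");
        await wt.ready;

        // UDP-like: may be dropped or reordered.
        const dgramWriter = wt.datagrams.writable.getWriter();
        await dgramWriter.write(new TextEncoder().encode("ping"));

        // TCP-like: a reliable bidirectional stream.
        const stream = await wt.createBidirectionalStream();
        const writer = stream.writable.getWriter();
        await writer.write(new TextEncoder().encode("hello over a stream"));
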
      • IshKebab 5 days ago |
        Not peer to peer though presumably?
        • modeless 5 days ago |
          Yes and not in Safari yet either. Someday I hope that all parts of WebRTC can be replaced with smaller and better APIs like this. But for now we're stuck with WebRTC.
        • jauntywundrkind 5 days ago |
          There was some traction & interest in https://github.com/w3c/p2p-webtransport but haven't seen any activity in a while now.

          I'm pretty cocksure a whole industry of p2p enthusiasts would spring up, building cool new protocols and systems on the web in rapid time, if this ever showed up.

          • arthurcolle 2 days ago |
            Perfect timing with realtime AGI happening. Need lots of focus on realtime streaming protocols
        • bedatadriven 3 days ago |
          This is a very early draft I'm following: https://wicg.github.io/local-peer-to-peer/
    • flohofwoe 5 days ago |
      There's also this new WebTransport thingie based on HTTP/3:

      https://developer.mozilla.org/en-US/docs/Web/API/WebTranspor...

      I haven't tinkered with it yet though.

      • modeless 5 days ago |
        Yeah, not in Safari yet and no peer-to-peer support. Maybe someday though! It will be great if all of WebRTC's features can be replaced by better, smaller-scoped APIs like this.
    • yesthisiswes 5 days ago |
      Awesome demo. I’ve really missed that map, it’s been too long.
    • typedef_struct 5 days ago |
      This looks to use Web Sockets, not WebRTC, right? I don't see any RTCPeerConnection, and the peerServer variable is unused.

      I ask because I've spent multiple days trying to get a viable non-local WebRTC connection going with no luck.

      view-source:https://thelongestyard.link/q3a-demo/?server=Seveja

    • justin66 4 days ago |
      Not really peer to peer though, is it? The q3 server is just running in the browser session that shares a URL with everyone else?
      • modeless 4 days ago |
        Yes, it is. The first peer to visit a multiplayer URL hosts the Quake 3 server in their browser. Subsequent visitors to the same multiplayer URL send UDP traffic directly to that peer. The packets travel directly between peers, not bouncing off any third server (after connection establishment). If your clients are on the same LAN, your UDP traffic will be entirely local, not going to the Internet at all (assuming your browser's WebRTC implementation provides the right ICE candidates).

        It won't work completely offline unfortunately, as the server is required for the connection establishment step in WebRTC. A peer-to-peer protocol for connection establishment on offline LANs would be awesome, but understandably low priority for browsers. The feature set of WebRTC is basically "whatever Google Meet needs" and then maybe a couple other things if you're lucky.

        • justin66 4 days ago |
          This is neat. A little perverse, but neat.
    • eternityforest 3 days ago |
      Doesn't WebRTC still require a secure server somewhere?

      Direct sockets will be amazing for IoT, because it will let you talk directly to devices.

      With service workers you can make stuff that works 100% offline other than the initial setup.

      Assuming anyone uses it and we don't just all forget it exists, because FF and Safari probably won't support it.

  • xenator 5 days ago |
    Can't wait to see it working.
    • revskill 5 days ago |
      Why wait? What can you do with it? Can't wait to wait for you.
  • bloomingkales 5 days ago |
    Can a browser run a web server with this?
    • apitman 5 days ago |
      I assume they would limit it to clients.
    • melchizedek6809 5 days ago |
      Since it allows for accepting incoming TCP connections, this should allow HTTP servers to run within the browser, although listening directly on port 80/443 might not be supported everywhere (I can't see it mentioned in the spec, but from what I remember, on most *nix systems only root can listen on ports below 1024, though I might be mistaken since it's been a while)
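
      Per the explainer, the listening side would look roughly like this (a sketch only; IWA-gated, and the exact shape may still change):

        // The server socket's readable stream yields one TCPSocket per
        // incoming connection.
        const server = new TCPServerSocket("0.0.0.0", { localPort: 8080 });
        const { readable: connections } = await server.opened;

        const reader = connections.getReader();
        while (true) {
          const { value: conn, done } = await reader.read();
          if (done) break;
          const { readable, writable } = await conn.opened;
          // ...speak HTTP (or anything else) over readable/writable here.
        }
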
  • Jiahang 5 days ago |
    nice!
  • fhdsgbbcaA 5 days ago |
    Great fingerprinting vector. Expect nothing less from Google.
  • Spivak 5 days ago |
    Anything that moves the web closer to its natural end state— the J(S)VM is a win in my book. Making web apps a formally separate thing from pages might do some good for the web overall. We could start thinking about taking away features from the page side.
    • remram 5 days ago |
      This is beyond that, it's more a move to remove the VM than make JS a generic VM.
  • mlhpdx 5 days ago |
    I’m excited, and anticipate some interesting innovation once browser applications can “talk UDP”. It’s a long time in the making. Gaming isn’t the end of it — being able to communicate with local network services (hardware) without an intervening API is very attractive.
    • immibis 5 days ago |
      Indeed. I'll finally be able to connect to your router and change your wifi password, all through your browser.
      • lazyasciiart 5 days ago |
        Shhh, you’re giving my parents unrealistic expectations of how much remote tech support I can do.
  • chrisvenum 5 days ago |
    I found this issue arguing that it's a bad idea for end-user safety:

    https://github.com/mozilla/standards-positions/issues/431

  • jeswin 5 days ago |
    I prefer web apps to native apps any day. However, web apps are limited in what they can do.

    But what they can do is not consistent - for example, a web app can take your picture and listen to your microphone if you grant permissions, but it can't open a socket. Another example: Chrome came out with a File System Access API [2] in August; it's fantastic (I am using it) and it allows a class of native apps to be replaced by web apps. As a user, I don't mind having to jump through hoops and giant warning screens to accept that permission - but I want this ability on the Web Platform.

    For web apps to be able to compete with native apps, we need more flexibility, Mozilla. [1]

    [1]: https://mozilla.github.io/standards-positions/ [2]: https://developer.chrome.com/docs/capabilities/web-apis/file...
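
    For example, the permission-gated flow looks something like this (a sketch; Chromium-only today, and the picker must be triggered by a user gesture):

      // File System Access API: the picker itself is the permission prompt.
      const [handle] = await window.showOpenFilePicker();
      const file = await handle.getFile();
      console.log(file.name, (await file.text()).length);

      // Writing back prompts the user for a second, explicit grant.
      const writable = await handle.createWritable();
      await writable.write("updated contents");
      await writable.close();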

    • 1oooqooq 5 days ago |
      nah. we need even less. i'd rather have webapps because of the limitations. much less to worry about
  • kureikain 5 days ago |
    This means that we can finally do gRPC directly from the browser.
  • hipadev23 5 days ago |
    What about WebTransport? I thought that was the http/3 upgrade to WebSockets that supported unreliable and out-of-order messaging
    • mmis1000 5 days ago |
      I think WebRTC data channels will be a good alternative if you want a peer-to-peer connection. WebTransport is strictly for client-server architectures only.
  • tjoff 5 days ago |
    Great, so now a mis-click and your browser will have a field day infecting your printer, coffee machine and all the other crap that was previously shielded by NAT and/or a firewall.
    • jeroenhd 5 days ago |
      As long as they don't change the spec, this will only be available to special locally installed apps in enterprise ChromeOS environments. I don't think their latest weird app format is going to make it to other browsers, so this will remain one of those weird Chrome only APIs that nobody uses.
      • fensgrim 5 days ago |
        > special locally installed apps in enterprise ChromeOS environments

          There was https://developer.chrome.com/docs/apps/overview though, so this seems to be a kind of planned feature creep after deprecating the former one? "Yeah our enterprise partners now totally need this, you see, no reasoning needed"

  • huqedato 5 days ago |
    Just now, when I have only recently switched permanently to Firefox...
  • troupo 5 days ago |
    Status of specification: "It is not a W3C Standard nor is it on the W3C Standards Track."

    Status in Chrome: shipping in 131

    Expect people claiming this is a vital standard that Apple is not implementing because they don't want web apps to compete with the App Store. Also expect sites like https://whatpwacando.today/ to uncritically just include this

    • meiraleal 5 days ago |
      Expect Apple claiming this is a not vital standard and Apple is not implementing because they don't want web apps to compete with App Store. Also expect sites like https://whatpwacando.today/ to obviously just include this
      • troupo 5 days ago |
        Which part of "is not a w3c standard and not any standards track" do you not understand?

        I am not surprised sites like that include Chrome-only non-standards, they've done this for years claiming impartiality

        • meiraleal 5 days ago |
          Cry me a river. Apple doesn't need you to defend their strategic and intentional PWA boycott.
          • troupo 5 days ago |
            Which part of "is not a w3c standard and not any standards track" do you not understand?

            Do you understand that for something to become a standard, it needs two independent implementations? And a consensus on API?

            Do you understand that "not on any standards track" means it's Chrome and only Chrome pushing this? That Firefox isn't interested in this either?

            Do you understand that blaming Apple for everything is borderline psychotic? And that Chrome implementing something at breakneck pace doesn't make it a standard?

            Here's Mozilla's extensive analysis, with its conclusion of "harmful", that Google sycophants and Apple haters couldn't care less about: https://github.com/mozilla/standards-positions/issues/431#is...

            • meiraleal 5 days ago |
              What part of "cry me a river" did you not understand? Don't go crazy because at least one of the browsers proposes things that move the web forward. Geez, you should take a break from the internet. So many "?"
              • troupo 5 days ago |
                > Don't go crazy because at least one of the browsers proposes things that move the web forward.

                No, they shape the web in an image that is beneficial to Google, and Google only.

                > Geez, you should take a break from the internet. So many "?"

                Indeed, so many "?", because, as you showed, Google sycophants cannot understand why these questions are important.

              • pjmlp 2 days ago |
                A generation lost in Internet Explorer....
            • nulld3v 5 days ago |
              There are a lot of reasons why people have such extreme differing opinions on this.

              I for one, am still salty about the death of WebSQL due to "needing independent implementations". Frankly put, I think that rule is entirely BS and needs to be completely removed.

              Sure, there is only one implementation of WebSQL (SQLite) but it is extremely well audited, documented and understood.

              Now that WebSQL is gone, what has the standards committee done to replace it? Well, now they suggest using IndexedDB or bringing your own SQLite binary using WASM.

              IndexedDB is very low level, which is why almost no one uses it directly. And it also has garbage performance, to the point where it's literally faster to run SQLite on top of IndexedDB instead: https://jlongster.com/future-sql-web

              So ultimately if you want to have any data storage on the web that isn't just key-value, you now have to ship your own SQLite binary or use some custom JS storage library.

              So end users now have to download a giant binary blob that is also completely unauditable. And now that there is no standard storage solution, everybody uses a slew of different libraries to try to emulate SQL/NoSQL storage. And this storage is emulated on top of IndexedDB/LocalStorage, so they are all trying to mangle high-level data into key-value storage, and it ends up being incredibly difficult to inspect as an end user.

              As a reminder: when the standards committee fails to create a good standard, the result is not "everybody doesn't do this because there is no standard", it is "everybody will still do this but they will do it 1 million different ways".

              • troupo 5 days ago |
                > Frankly put, I think that rule is entirely BS and needs to be completely removed.

                That's what Google is essentially doing: they put up a "spec", and then just ship their own implementation, all others be damned.

                Here's the most egregious example: WebHID https://github.com/mozilla/standards-positions/issues/459

                --- start quote ---

                - Asked for position on Dec 1, 2020

                - One month later, on Jan 4, 2021, received input: this is not even close to being even a draft for a standard

                - Two months later, on March 9, 2021, enabled by default and shipped in Chrome 89, and advertised it as fait accompli on web.dev

                - Two more months later: added 2669 lines of text, "hey, there's this "standard" that we enabled by default, so we won't be able to change it since people probably already depend on it, why don't you take a look at it?"

                --- end quote ---

                The requirement to have at least two independent implementations is there to try and prevent this thing exactly: the barreling through of single-vendor or vendor-specific implementations.

                Another good example: Constructible Stylesheets https://github.com/WICG/construct-stylesheets/issues/45

                Even though several implementations existed, the API was still in flux, and the spec had a trivially reproduced race condition. Despite that, Google said that their own project needed it and shipped it as is, and they wouldn't revert it.

                Of course over the course of several years since then they changed/updated the API to reflect consensus, and fixed the race condition.

                Again, the process is supposed to make such behavior rare.

                What we have instead is Google shitting all over standards processes and people cheering them on because "moving the web forward" or something.

                ---

                As for WebSQL: I'm also sad it didn't become a standard, but ultimately I came to understand and support Mozilla's position. Short version here: https://hacks.mozilla.org/2010/06/beyond-html5-database-apis... Long story here: https://nolanlawson.com/2014/04/26/web-sql-database-in-memor...

                There's no actual specification for SQLite. You could say "fuck it, we ship SQLite", but then... which version? Which features would you have enabled? What would be your upgrade path alongside SQLite? etc.

  • pjmlp 5 days ago |
    Yet another small step toward the ChromeOS takeover.
  • arzig 5 days ago |
    The inner platform effect intensifies.
  • Asmod4n 5 days ago |
    Thank god they plan to limit this to electron type apps.
  • sabbaticaldev 5 days ago |
    so with this I would be able to create a server in my desktop web app and sync all my devices using webrtc
  • Uptrenda 5 days ago |
    I saw this proposal years ago now and was initially excited about it. But seeing how people envisioned the APIs, usage, etc, made me realize that it was already too locked down. Being able to have something that ran on any browser is the core benefit here. I get that there are security concerns but unfortunately everyone who worked on this was too paranoid and dismissive to design something open (yet secure.) And that's where the proposal is today. A niche feature that might as well just be regular sockets on the desktop. 0/10
  • hexo 5 days ago |
    Game over for security.
  • revskill 5 days ago |
    That means we can connect directly to a remote Postgres server from the web browser?
    • zamadatix 5 days ago |
      So long as you do it from an isolated web app rather than a normal page.
  • FpUser 5 days ago |
    All nice and welcome. But at what point does the browser become a full-blown OS with the same functionality and associated vulnerabilities, yet still less performant as it sits on top of another OS and goes through more layers? And of course run and driven by one of the largest privacy invaders and spammers in the world.
    • anilgulecha 5 days ago |
      > At what point does the browser become a full-blown OS.

      Happened over a decade ago - ChromeOS. It's also the birthplace of other similar tech: WebMIDI, WebUSB, Web Bluetooth, etc.

  • badgersnake 5 days ago |
    It’s pretty clear Google are building an operating system, not a browser.
    • pjmlp 2 days ago |
      It is called ChromeOS, and its spread is helped by everyone that keeps pushing Electron all over the place.
  • grishka 5 days ago |
    Can we please stop this feature creep in browsers already?
  • demarq 5 days ago |
    Something tells me this is more to do with a product Google wants to launch rather than a genuine attempt to further the web.

    I’ll keep my eyes on this one, see where we are in a year

  • westurner 5 days ago |
    From "Chrome 130: Direct Sockets API" (2024-09) https://news.ycombinator.com/item?id=41418718 :

    > I can understand FF's position on Direct Sockets [...] Without support for Direct Sockets in Firefox, developers have JSONP, HTTP, WebSockets, and WebRTC.

    > Typically today, a user must agree to install a package that uses L3 sockets before they're using sockets other than DNS, HTTP, and mDNS. HTTP Signed Exchanges is one way to sign webapps.

    But HTTP Signed Exchanges is cancelled, so arbitrary code with sockets if one ad network?

    ...

    > Mozilla's position is that Direct Sockets would be unsafe and inconsiderate given existing cross-origin expectations FWIU: https://github.com/mozilla/standards-positions/issues/431

    > Direct Sockets API > Permissions Policy: https://wicg.github.io/direct-sockets/#permissions-policy

    > docs/explainer.md >> Security Considerations : https://github.com/WICG/direct-sockets/blob/main/docs/explai...