saurik 9 months ago

I do not have access to the original paper, but I would want to see how this compares to 802.11ah "WiFi HaLow".

(edit) OK, I got a copy from ResearchGate, and I misunderstood! I had failed to grok the part of the article where LoRa is now supported by the sx128x (as opposed to the sx126x) on 2.4GHz.

https://www.researchgate.net/publication/383692369_WiLo_Long...

> In this article, we introduce a new algorithmic framework called WiLo, designed to enable directional communication from Wi-Fi to LoRa, which employs signal emulation techniques to enable off-the-shelf Wi-Fi hardware to produce a valid 2.4 GHz LoRa waveform.

So, critically, and as far as I can tell this isn't in the summary article, this is purely unidirectional; and so, this isn't about being able to build a network that upgrades the range of WiFi with some tradeoffs: this is about being able to send data from existing WiFi hardware to existing LoRa hardware using a relatively minimal set of changes (though I still don't appreciate how this would practically be done to the existing hardware, and they apparently only simulated this with software-defined radio).

> The core innovation of WiLo lies in the signal emulation technique used to generate a valid 2.4 GHz LoRa waveform. Through sophisticated signal processing algorithms, WiLo transforms the standard Wi-Fi signals into LoRa-like waveforms, while ensuring compliance with the LoRa modulation specifications. This enables the LoRa hardware to decode WiFi signals without requiring any modifications to the hardware itself. The emulation of LoRa waveforms is achieved by carefully manipulating the parameters of the Wi-Fi signals, such as the modulation index, spreading factor, and BW, to closely match the characteristics of LoRa modulation.

> We would like to emphasize that WiLo is directly supported among commodity devices, and the USRP-B210 devices are used only for evaluation purposes to measure low-level PHY information, which is inaccessible by commodity devices. For example, a commodity Wi-Fi card such as the Atheros AR2425 can replace USRP-B210 devices as the sender.

  • altairprime 9 months ago

    I want to highlight that this paper should be read in the same spirit as "guess what! we figured out how to cross-compile C into JavaScript using Emscripten" came across back in the day, before our modern viewpoint where WebAssembly is taken for granted.

    It doesn't mean that this should be used, or should be the standard, but it absolutely does mean that this is possible to do within the terms of the 802.11g radio protocol spec, which no one had really realized and done the heavy lifting to demonstrate yet.

  • toomuchtodo 9 months ago

    > So, critically, and as far as I can tell this isn't in the summary article, this is purely unidirectional; and so, this isn't about being able to build a network that upgrades the range of WiFi with some tradeoffs: this is about being able to send data from existing WiFi hardware to existing LoRa hardware using a relatively minimal set of changes (though I still don't appreciate how this would practically be done to the existing hardware, and they apparently only simulated this with software-defined radio).

    This leads me to believe you could flip a switch and turn entire swaths of access points into a broadcast fabric for LoRa? Wifi networks meet software defined radio a bit.

  • nine_k 9 months ago

    OK, we have a Wi-Fi device that can talk to a LoRa device at a large distance. Now replace the LoRa device with another Wi-Fi device that talks the LoRa protocol. If mission is not accomplished, what's missing?

    • saurik 9 months ago

      They didn't say they could have a WiFi device receive this.

    • szundi 9 months ago

      ... that LoRa is not Wifi, slow as hell and saturates the air quickly.

      • fidotron 9 months ago

        You mean the LoRa bandwidth available in a given area is surprisingly low?

        I've only heard of it, never used it. Not surprised to hear it is slow, but I am surprised by the other part.

  • keeda 9 months ago

    > (though I still don't appreciate how this would practically be done to the existing hardware, and they apparently only simulated this with software-defined radio).

    It is my understanding that most modern baseband chips can effectively be considered "software defined radios", as most of the modulation/demodulation is performed by the firmware. While the researchers appear to have used a USRP (a dedicated SDR platform), it is conceivable their scheme could be accommodated in the firmware.

    • walterbell 9 months ago

      > most modern baseband chips can effectively be considered "software defined radios"

      Is there a comparably-priced SDR that could be used for WiFi data transmit/receive with GNUradio?

      • AleaImeta 9 months ago

        As far as I know, transmitting and receiving Wi-Fi traffic will never be possible using GNUradio, because you cannot meet the maximum 16 microsecond latency for sending an acknowledgment after you decoded a packet successfully.

        It's possible with an FPGA-based software-defined radio though: https://github.com/open-sdr/openwifi

      • keeda 9 months ago

        I'm not sure what you are asking it should be comparably priced to, but USRPs are on the higher end of the cost spectrum. Caveat: my experience here is extremely limited, but at one point I too was looking into affordable GNURadio-compatible SDR hardware that could transmit and receive (as opposed to the RTL-SDRs that can only receive) and I came across options like HackRF and LimeSDR.

        However, knowledgeable people also pointed out that these cheaper options make tradeoffs in the RF hardware that make it harder to get reliable performance for non-trivial uses. Their opinion was that the time saved in working around those limitations was well worth the extra cost of a USRP.

        • femto 9 months ago

          The BladeRF and ADALM-PLUTO are cheaper alternatives to the entry level (B2xx) USRPs. They use the same Analog Devices MIMO chip as the USRP, so are similar in capability.

        • stogot 9 months ago

          Are there mapped communities of SDR folks I can connect with who want to build off-grid networks? I’d be interested in pursuing this.

  • wkat4242 9 months ago

    Hmm. LoRa uses up- and downchirps. That would be pretty difficult to do with a WiFi radio that's meant to stick to predefined channels. But the radio is probably some kind of SDR.
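
    For anyone who hasn't seen one: a baseline LoRa symbol is just a linear chirp swept across the channel bandwidth, so it's easy to sketch. A rough numpy illustration (the SF/BW/sample-rate parameters are illustrative defaults, not taken from the paper):

        import numpy as np

        def lora_upchirp(sf=7, bw=125e3, fs=1e6):
            """One baseline LoRa upchirp: a linear frequency sweep from -bw/2 to +bw/2."""
            t_sym = (2 ** sf) / bw           # symbol duration: 2**sf chips at bw chips/s
            t = np.arange(0, t_sym, 1 / fs)
            k = bw / t_sym                   # chirp rate, Hz per second
            # instantaneous frequency f(t) = -bw/2 + k*t; the phase is its integral
            phase = 2 * np.pi * (-bw / 2 * t + 0.5 * k * t ** 2)
            return np.exp(1j * phase)        # complex baseband samples

        upchirp = lora_upchirp()
        downchirp = np.conj(upchirp)         # a downchirp is just the conjugate sweep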

  • NewJazz 9 months ago

    I thought this was a HaLow competitor too... Thanks for checking on that.

londons_explore 9 months ago

I just want speed to degrade gracefully down to 1 kbps or even 100 bps. I.e. I should be able to be 1 mile from my house but still be iMessaging over my home wifi.

Physics lets me do that with no additional transmit power (Shannon's channel capacity formula, combined with signal power dropping off as a function of distance squared).

If suitable modulation were chosen (i.e. some kind of CDMA), then a setup which can give 100 Mbit at 100 yards should be able to do 322 kbits at 1 mile, and 3 kbps at 10 miles!
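
A back-of-envelope sketch of that scaling (assuming a 20 MHz channel, free-space inverse-square loss, and a starting SNR picked so that 100 yards gives 100 Mbit/s; real path-loss exponents are steeper than 2, so exact numbers will differ):

    import math

    BW = 20e6                            # assumed channel bandwidth, Hz
    # pick the reference SNR so Shannon capacity at 100 yards is 100 Mbit/s:
    # C = BW * log2(1 + SNR)  =>  SNR = 2**(C/BW) - 1 = 31 (~15 dB)
    snr_ref = 2 ** (100e6 / BW) - 1

    def capacity_bps(distance_yards):
        snr = snr_ref * (100 / distance_yards) ** 2   # power falls off as 1/d**2
        return BW * math.log2(1 + snr)

    for d in (100, 1760, 17600):         # 100 yards, 1 mile, 10 miles
        print(f"{d:>6} yd: {capacity_bps(d) / 1e3:,.0f} kbit/s")

Inverse-square alone gives roughly 2,750 kbit/s at 1 mile and 29 kbit/s at 10 miles, so the steeper figures above imply a path-loss exponent above 2.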

  • altairprime 9 months ago

    WiFi 7 finally extends wi-fi clients to be capable of bonding multiple radios together into a coherent link. This is critical to being able to bond in longer-range, lower-bandwidth channels in the future, and I'm certain they are considering how they can bring all of the 802.11 wireless specifications under one umbrella.

    However, this will also expose issues with encapsulation of "the connection", which can now vary exponentially in capacity. Operating systems and applications are coded to be 'carrier frequency agnostic', without any capability for a given connection to switch from "low data mode" (avoid unnecessary transmissions) to "high data mode" (do whatever you like), much less to "extremely low data mode" (i.e. push notifications and foreground app only) or to "extremely high data mode" (i.e. 4k / 240fps over 60GHz); all on a single dynamically-adjusting connection.

    Cellular fakes this today by saying "5G high data permitted?" but not being able to downgrade OS and app behaviors gracefully when the 5G connection is hella weak or overloaded, i.e. not fetching mail, not autoplaying movies, etc.

    • DaiPlusPlus 9 months ago

      Windows exposes connection-speed data to applications, and don’t forget the “metered connection” setting - which plenty (albeit probably not most) applications support - which goes a long way toward solving the problem you’re describing.

  • hinkley 9 months ago

    The problem with that is you end up time-division multiplexing. Packets for the distant client take vastly more microseconds to transmit than packets for the high-bandwidth nearby clients. The aggregate bandwidth for the system craters with more remote clients.
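
    A toy illustration with per-packet round-robin and made-up rates (a minimal sketch, not real 802.11 airtime accounting):

        # each client gets one equal-sized packet per round, so airtime is
        # dominated by the slowest client and aggregate throughput craters
        PKT = 1500 * 8                                       # packet size, bits

        def aggregate_mbps(rates_mbps):
            round_airtime = sum(PKT / (r * 1e6) for r in rates_mbps)  # seconds
            return len(rates_mbps) * PKT / round_airtime / 1e6

        print(aggregate_mbps([100] * 10))        # ten fast clients: 100 Mbit/s
        print(aggregate_mbps([100] * 9 + [1]))   # add one 1 Mbit/s client: ~9 Mbit/s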

  • eternityforest 9 months ago

    Is that really true in practice though? Unless you're in the true middle of nowhere, by the time you get a mile out, there's going to be other people using the spectrum, and at low bandwidth they'll be using it for a long time.

    Current stuff like LoRa works because there's not many users and the protocols are optimized, but if everyone was iMessaging we'd probably need more spectrum.

    We can already do WiFi for miles with narrow-beam antennas; we could make mesh network tech cheap if it were standardized and mass-produced.

  • 05 9 months ago

    iMessage works over TCP; you're just going to be stuck in an endless reconnect/retransmission loop.

    > a setup which can give 100 Mbit at 100 yards should be able to do 322 kbits at 1 mile, and 3kbps at 10 miles!

    That's not how the noise floor works.

  • adrian_b 9 months ago

    The oldest WiFi standards, at 1 Mb/s or 2 Mb/s, could easily establish point-to-point links at 50 km or even 100 km, provided you used good directive antennas mounted on high-enough masts.

    This could be used to implement bridges between LANs located at great distances from one another. There were commercial products implementing such LAN bridges through WiFi.

    When an access point must transmit and receive omni-directionally, to reach any station placed randomly around it, that greatly diminishes the achievable range.

  • crazygringo 9 months ago

    How would you handle interference with many thousands of other WiFi routers within the 10 mile radius, operating on the same frequencies?

    • londons_explore 9 months ago

      CDMA keying.

      You have to be close to a router the first time to connect to it.

      And after that you both have some long (i.e. 65536-bit) random key which is used as a modulation code for your transmissions.

      The receiver demodulates with the key to get the data out. A different key can get different data out of the same airwaves at the same time.

      Clocks need to be synced perfectly, which is in itself a big technical challenge.

      As the article alludes to, low-power signal reception is also challenging.
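
      A toy numpy sketch of the despreading idea (tiny 128-chip random codes standing in for the long keys described above; no clock-sync problem modeled):

          import numpy as np

          rng = np.random.default_rng(0)
          N = 128                                  # chips per data bit

          key_a = rng.choice([-1.0, 1.0], size=N)  # each station's random code
          key_b = rng.choice([-1.0, 1.0], size=N)

          bit_a, bit_b = 1, -1                     # one data bit per station
          # both transmit at once; the receiver sees the sum plus noise
          air = bit_a * key_a + bit_b * key_b + rng.normal(0, 1, N)

          # correlating with one key averages the other signal and noise away
          print(np.dot(air, key_a) / N)            # ~ +1  -> recovers bit_a
          print(np.dot(air, key_b) / N)            # ~ -1  -> recovers bit_b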

  • abracadaniel 9 months ago

    That’s my dream network. A long range, low bandwidth, decentralized network. Mesh would be cool, but even just being able to exchange with neighbors at the scale of 1-10mi would be amazing.

    • franek 9 months ago

      This sounds like what RNode devices for Reticulum networks appear to be able to do. (I haven't tried it for myself yet.)

      > RNodes can be made in many different configurations, and can use many different radio bands, but they will generally operate in the 433 MHz, 868 MHz, 915 MHZ and 2.4 GHz bands. They will usually offer configurable on-air data speeds between just a few hundred bits per second, up to a couple of megabits per second.

      > [...]

      > While speeds are lower than WiFi, typical communication ranges are many times higher. Several kilometers can be achieved with usable bitrates, even in urban areas, and over 100 kilometers can be achieved in line-of-sight conditions.

      ( https://unsigned.io/rnode/ )

      > Reticulum is the cryptography-based networking stack for building local and wide-area networks with readily available hardware. Reticulum can continue to operate even in adverse conditions with very high latency and extremely low bandwidth.

      ( https://reticulum.network/ )

      • abracadaniel 9 months ago

        That does look fascinating. I hadn’t seen this yet, thank you.

    • eternityforest 9 months ago

      Meshtastic works fine for that, it just doesn't scale beyond tiny text messages

    • sneak 9 months ago

      It would be illegal (at least in the USA); you’re not allowed to share your home internet, or function as a public utility.

      There are huge regulatory moats around everything that costs $20-500/mo recurring and is incurred by large percentages of the population. Internet access is a huge one.

      • wilted-iris 9 months ago

        Citation? There are a few large public meshes in the US. I'm unaware of anything that makes them illegal to run.

      • johnnyanmac 9 months ago

        Well, that would explain the sad state of government-sanctioned WiFi. They were bought out, as usual, by people who just want an extra buck instead of properly serving the public's needs.

      • BenjiWiebe 9 months ago

        I know it's usually against the ToS to share your home WiFi, but this is the first I've ever heard that it's illegal.

Tor3 9 months ago

Did I understand the article correctly in that they simply managed to reach 500 meters with wi-fi? If so, I don't see what they have actually achieved. In the early days of 802.11b I regularly connected my wifi-enabled (via dongle) Palm PDA to open networks that were sometimes hundreds of meters away, and the airport free wifi I could use from 1.5 km away (at least - it could be longer, it's just that the place I frequented was that far away). The usable distance started to shrink drastically as the airwaves got more crowded, and as soon as you could see tens of networks at a time then suddenly the cafeteria network was only usable from inside, whereas before you could use it a couple hundred meters away, across the large square.

Of course, if it's about managing that in a crowded network space.. but the article was extremely short on details.

  • altairprime 9 months ago

    No. They reached 500 meters with LoRaWAN (no 802 spec), using an 802.11g (WiFi 3) radio.

Szpadel 9 months ago

Let's assume that this takes off and becomes a standard addition to our WiFi devices.

Given the big range of this technology, how does it handle air congestion when we have hundreds, maybe thousands, of devices in range?

I expect low throughput from this technology, and for IoT that's usually fine, but when we need to share this spectrum with a lot of devices we might quickly make it non-operational. And that's even assuming we do not have some devices that request much more bandwidth than others.

With 2.4 GHz WiFi we already struggle with air congestion, and a quick Google shows that LoRa has 13 + 8 channels, and if I understand it correctly some of them are used explicitly for joining the network (?)

I think this technology is really cool only as long as it doesn't get too popular.

  • zamadatix 9 months ago

    People are responding to this with the mindset of watching 1080p TV, not realizing 1 second of a 1080p Netflix stream will use 5x the total daily bandwidth of an IoT device reporting temperature once every 10 seconds for the whole day. These are entirely different use cases, and the impact of congestion on the two is like talking about what matters to a garden on Mars vs Earth.

    The big limitation I see here, and where Wi-Fi has historically failed even with 802.11ah specifically built for the IoT use case and standardized back in 2015, is the "uses extra power" bit. Other protocols like LoRa are designed around minimizing power at the end stations. At the end of the day that's often a bigger deal than bandwidth for long-range IoT.

    • lucb1e 9 months ago

      > 1 second of a 1080p Netflix stream will use 5x the total daily bandwidth of an IoT device reporting temperature once every 10 seconds

      I don't have a Netflix file to test with, but for YT the video data is 73 KiB for the first second (tested by ffmpeg'ing the first second into a new file with codecs set to 'copy'). The page, however, seems to fetch about 9 megabytes in the first second of loading (various features still loading, such as comments).

      Reporting temperature, let's say it's super secure with IV and MAC headers and trailers and another nonce for replay protection (all in all 3× 32 bytes), plus the actual value and other metadata (like which device the report is from) of, say, another 32 bytes, plus IPv6 headers etc., is a couple hundred bytes. Call it 256. There are a gross (144) ten-minute spans in a day, coming to 144×256 = 36 KiB.

      Huh, that's pretty accurate when considering video data ("any second" rather than the first second specifically that needs to include the page weight). I had expected the video data to be vastly bigger compared to that sensor reporting just one value per ten minutes. That keyframe+video compression is really quite something; raw 24-bit would be 6 MiB rather than 73 KiB

      • zamadatix 9 months ago

        Video services use adaptive bitrate so a 1 second sample will give a very variable estimate. If you can watch the full video and monitor the NIC bandwidth (or, if you don't want to count headers, download the full video and look at the media info average bitrate) you'll get a more consistent number. For 1080p TV type content it's often a lot more than 262 MB/h. Raw should be 1920×1080×3×24 = ~149 MB since there are 24 frames in a second, keeping that magic of video compression alive despite the larger base rate.

        Netflix actually gives the value of 3 GB/hour directly for high definition content https://help.netflix.com/en/node/87 which is ~833 KB in one second (or about an average 6 mbps plus some delivery overhead) which simplifies things quite a bit though.

        Encryption is an interesting thought in that LoRaWAN has encryption built in, and the device would typically expect the gateway to handle translating that to IPv6 with traditional encryption and application headers. CTR is used so the packets don't gain much bloat; the minimum size is ~50 bytes and the maximum size is ~256 bytes. Temperature reports (4 byte float) typically fit in that 50 byte minimal packet category, or 432 KB/day reporting every 10 seconds. So I suppose my estimate of 5x was a bit high after all, maybe 1-2x would have been more apt :).

        But it does beg the question, back to your point: what would that look like in this WiLo protocol? Standard Wi-Fi's minimum frame size in the air is going to be closer to 100 bytes (minimum Ethernet length of 64 plus Wi-Fi bits). On top of that, Wi-Fi doesn't usually act as the gateway like LoRa does - does the IoT device now need to package everything in IPv6 + transport security itself and send it over the air with WiLo, or is the expectation that APs would be more gateway-like for these clients? Dunno, wasn't able to read the actual paper in the link to see what is proposed :).
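
        Putting those revised numbers side by side (the 3 GB/hour figure from the Netflix page above, 50-byte packets every 10 seconds):

            video_1s = 3e9 / 3600               # 3 GB/hour -> ~833 KB per second
            reports_per_day = 24 * 3600 // 10   # 8640 reports at one per 10 s
            iot_day = reports_per_day * 50      # 50-byte LoRaWAN packets -> 432 KB

            print(f"video, 1 second: {video_1s / 1e3:.0f} KB")
            print(f"IoT, whole day:  {iot_day / 1e3:.0f} KB")
            print(f"ratio: {video_1s / iot_day:.1f}x")   # ~1.9x, i.e. the 1-2x estimate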

  • 486sx33 9 months ago

    I live on a pretty standard-density street; there are a few semi-detached homes mixed in. I’d still call it light density.

    I have 2 × 5 GHz channels, 2 × 2.4 GHz channels, and then a repeater with another 2 and 2.

    In the evening there is so much congestion on every available channel on either band that I can’t watch 1080p TV.

    This long range thing sounds awful.

    • jeroenhd 9 months ago

      You don't need to watch 1080p TV to report the current temperature and humidity, or to receive a command to turn on a light bulb.

      As for channel congestion, check whether your WiFi repeater is in mesh mode or not. If not, it literally halves the throughput on your WiFi network, which already seems to be over-congested by whatever is messing with your channel settings. Based on your description of the area, if your 5 GHz somehow ran out of space, something seriously weird is going on. Maybe some non-WiFi device is using the 2.4/5.2 GHz band to transmit data? I know of stories of cheapo baby monitors wiping out entire neighbourhoods, for instance.

      • pbhjpbhj 9 months ago

        We have some bad channel congestion (UK, terraced house, relatively narrow street), but part of the problem appears to be that ISPs lock the channels: there are completely free channels, but the settings on the router don't allow one to select a channel; it auto-picks. Weird. The algo appears to be "choose channel 1 even if it's totally congested" - for most of the neighbours too (4 or 5 different ISPs).

        I can only assume that they sell "super-duper routers with greater bandwidth" where they make the box look like a spaceship and actually enable a useful channel selection algorithm? Is there an actually good reason for the channel selection being awful?

        • jeroenhd 9 months ago

          > Is there an actually good reason for the channel selection being awful?

          The only one I can think of, based on my experience back in the day, is to prevent people from picking 2.4GHz channels that aren't 1/6/11. Modern routers often come with visualisations to show why this is a bad idea, but I've seen plenty of cases where people went "wow, these 1/6/11 channels are super crowded, I'll just use channel 3 because nobody else is using that" and making the situation much worse by now overlapping and interfering with two common channels instead of just one.
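
          A quick sketch of why, using the standard 2.4 GHz channel centers (2407 + 5·ch MHz) and the classic ~22 MHz 802.11b mask:

              # channels are 5 MHz apart but ~22 MHz wide, so only channels
              # spaced 5 apart (1, 6, 11) stay clear of each other
              WIDTH = 22  # MHz

              def center_mhz(ch):
                  return 2407 + 5 * ch

              def overlaps(a, b):
                  return abs(center_mhz(a) - center_mhz(b)) < WIDTH

              print(overlaps(1, 6), overlaps(6, 11))   # False False: the "good" trio
              print(overlaps(3, 1), overlaps(3, 6))    # True True: channel 3 hits both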

          Part of the spec also mandates that for wide 802.11n channels, you can't just force yourself into using a wide spectrum when there's not much spectrum to go around. On the 5.2+GHz band there are also legal restrictions (such as DFS, to not interfere with radars, and sometimes transmission power limits on specific subsections of the band).

          Some larger corporate WiFi installations will automatically adjust channels across many access points to optimise coverage and minimise interference. Perhaps ISP APs will do something similar, as ISPs have a somewhat accurate map of what frequencies are used in a geographical area. That seems rather overbearing to me, though.

          I still can't think of a reason to not let users pick one of the "good" bands themselves. Some kind of automatic frequency selection is a solid default, but I don't see why users shouldn't get the freedom to override that default.

        • crazygringo 9 months ago

          Is there any reason you can't just plug your own WiFi router into its Ethernet port so you have control over your WiFi?

    • sneak 9 months ago

      This means something is wrong with one or more of the stations on those bands. It’s not normal.

  • neuroelectron 9 months ago

    It could be silently adopted to allow longer distance for things like map apps that only need a few kilobytes for wifi triangulation.

malfist 9 months ago

I'm curious what the speed would be; kinda strange the post mentions "maintaining speed" but not what speed is maintained.

  • nicpottier 9 months ago

    This looks to be about running LoRa-like networks on WiFi hardware. Speed on LoRa is not something talked about much, as it is more like SMS-style message passing than IP networking.

    • malfist 9 months ago

      Probably why it was talking about IoT use. 500 meters for a couple-hundred-baud connection doesn't seem too groundbreaking. Off-the-shelf 900 MHz radios can easily achieve that.

      • brookst 9 months ago

        It’s about WiFi to LoRa interop, which is nice but not world changing.

        • willcipriano 9 months ago

          For smart home applications this could be big. No longer need a hub.

          • mschuster91 9 months ago

            Most Smart Home stuff is either Zigbee, Bluetooth or Wifi.

            LoRa stuff is ... hell, I can't remember when I've seen that stuff outside of monitoring for remote cabins and the like.

      • MostlyStable 9 months ago

        Yeah, the main draw seemed to be that you don't need a special receiver and that standard networking gear would work, but... LoRa hardware is not very expensive or complicated.

  • calibas 9 months ago

    Assuming it's the same as LoRa, up to 50 kbit/s.
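
    For reference, the commonly cited chirp-mode bit-rate formula is Rb = SF · (BW / 2^SF) · CR; a quick sketch with illustrative SF/BW combinations (the oft-quoted 50 kbit/s figure is, as far as I know, LoRaWAN's FSK fallback rate rather than a chirp mode):

        def lora_bitrate(sf, bw_hz, cr=4 / 5):
            # SF bits per symbol, BW / 2**sf symbols per second, scaled by coding rate
            return sf * (bw_hz / 2 ** sf) * cr

        for sf, bw in [(12, 125e3), (7, 125e3), (7, 500e3)]:
            print(f"SF{sf} @ {bw / 1e3:.0f} kHz: {lora_bitrate(sf, bw) / 1e3:.2f} kbit/s")
        # SF12 @ 125 kHz: 0.29 kbit/s ... SF7 @ 500 kHz: 21.88 kbit/s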

rajnathani 9 months ago

This is not a part of any of the official upcoming Wi-Fi standards/specs, unlike how the title of the article makes it seem.

dzhiurgis 9 months ago

Baffles me that Starlink terminals don't ship with LoRa and HaLow. Maybe too soon, but it's an obvious improvement for remote farms, etc.

sneak 9 months ago

Isn’t LoRa patented and proprietary?

brcmthrowaway 9 months ago

Isn't there already 802.11ah?

  • altairprime 9 months ago

    LoRaWAN (has no 802 spec) and Wi-Fi HaLow (802.11ah) both share a target market, but use different underlying protocols. This isn't meant to be "we have created a new standard" or "we prefer one or the other standard", this is simply "holy crap, we figured out how to make an 802.11g radio emit LoRaWAN packets?! and no one has ever written a paper about doing that before!".

jessriedel 9 months ago

The overwhelming issues with WiFi are

1. It is slow to connect, taking multiple seconds rather than a few milliseconds. (Wifi unreliability would have much less practical impact if there was rapid reconnect.)

2. The lack of a sufficiently flexible standard interface for logging in and accepting terms, leading to the terrible captive portal workaround.

I cannot for the life of me understand why the standards committee cares much about various other minor improvements when these issues are still unsolved after two decades. (Similar complaints can be made about Bluetooth.)

  • tzs 9 months ago

    > I cannot for the life of me understand why the standards committee cares much about various other minor improvements when these issues are still unsolved after two decades.

    It's different people.

    WiFi is a pretty wide field. There are plenty of people whose interests and/or qualifications only extend to part of it. Should they just sit around twiddling their thumbs if the parts they can contribute to aren't the parts with the biggest issues?

    • jessriedel 9 months ago

      You can re-phrase my main question as: why is there enough interest in doing these minor improvements when there is apparently little interest in fixing the major and long-lasting deficiencies?

      • MerManMaid 9 months ago

        In the context of this thread, IEEE aren't the ones who can improve such things. WiFi 6E and WiFi 7 both make considerable improvements on connect time but at the end of the day, they deal with the "backend" if you will.

        They have no control over crappy WiFi NIC cards that delay connections or how Apple displays captive portals.

      • 14 9 months ago

        I think what you call minor improvements would be considered significant to others who may benefit from it.

      • cheema33 9 months ago

        > why is there enough interest in doing these minor improvements...

        Why are people working on improvements to wi-fi at all, when there are people around the world dying of hunger?

        If you can answer that, I think you'll be able to answer your own question.

        • jessriedel 9 months ago

          No. Ending hunger is hard.

  • nine_k 9 months ago

    1. Wi-Fi is of course capable of reconnecting fast, so you don't even notice it. When you see multiple seconds' worth of a connectivity gap, it's likely not because Wi-Fi is stupidly waiting. Most probably it's radio interference, or the access point may be overloaded and not have enough resources to service all the connections, so it drops some.

    2. Wi-Fi is a link layer; equally, TCP and even HTTP do not offer a generic mechanism for logging in and accepting terms. This needs to be addressed at a different level, but I totally agree, it would be great to have a reasonable standard that's open and is better than a captive portal.

    • jessriedel 9 months ago

      1. When I have looked into this, it turns out the basic WiFi standard has rules about polling rate (allegedly to save power). Like, the standard is literally something like “poll for a few hundred microseconds every couple seconds”. Macbooks even have special workarounds that decrease the connection time by ~half by mildly abusing the standard.

    • ajb 9 months ago

      Or it can be multiple access points on the same SSID without 802.11r set up properly

  • dataflow 9 months ago

    > It is slow to connect, taking multiple seconds rather than a few milliseconds.

    What is the reason for this?

    • zamadatix 9 months ago

      For standard Wi-Fi the biggest factors for a fresh association are:

      - Discovery. You have to wait to see which saved networks are broadcasting. Broadcasting more often = less efficient airspace for already attached clients. Broadcasting less often = longer delay for clients to "see" the network when they start listening.

      - External authentication. E.g. if you're doing RADIUS auth or MAC auth with an external database instead of a PSK exchange there is extra time in setting up this exchange and then waiting for the external authenticator to validate it.

      - DHCP / NDP. Wi-Fi assumes you more or less want to emulate a standard Ethernet + IP session but over this newfangled air connection. This is an additional delay for an additional exchange with the services responsible for this. Typical clients, e.g. Windows, will also perform extra duplicate-address checks, slowing things further.

      There are some extensions in more modern Wi-Fi standards for clients that want to "sleep" for long periods, immediately do some stuff, then sleep for long periods (like IoT). Particularly TWT (target wake time). These, and more, are already found in purpose-built protocols like LoRa/LoRaWAN though.

      • londons_explore 9 months ago

        Discovery could take up to 100 milliseconds.

        All the others, if properly implemented, are speed-of-light things. E.g. RADIUS auth to an external database on the same physical site should easily be doable within 1 millisecond. It's not like that database has a multi-second queue of other users to connect first.

        • zamadatix 9 months ago

          While 100 milliseconds (well, slightly more actually - 100 time units != 100 ms, it's just very close) is the default beacon interval, most clients will scan for longer than that to ensure they hear the best SSID (and the best AP for it) before initiating a connection, rather than connecting just to throw it away/immediately roam anyway. This allows higher beacon intervals to work without the clients thrashing themselves between advertising stations. Also, not every beacon will actually make it through to be heard, so clients generally assume they shouldn't just go with the first one they hear.

          As for how a RADIUS auth should take 1 ms... sure, and updating macOS on an M3 shouldn't take an hour either. I'm not saying how tech should be, I'm saying how it is and which steps often add significant latency as implemented. Even a dedicated Aruba ClearPass authentication server on prem scaled for 25k clients can't handle 1,000 auths/second (ignoring that AD would fall over before then).

          Usually the drivers are also part of the problem (both on the AP and the client). Same with the OS software around it. Things that should be instant just aren't. E.g. go try to disconnect and reconnect to the same SSID on macOS via the GUI. If you do that in ~1 second you should get an error - not a delay, an error that it hadn't synced up with its own connection state and couldn't possibly figure out how to try to connect.

          None of that is actually Wi-Fi; if you implement an IoT device with an embedded radio and sidestep a lot of those layers, suddenly the protocol is much quicker to connect (still not as fast as some simpler ones though).

        • Two4 9 months ago

          You've never dealt with enterprise networks where the auth server lived on a different continent

          • nine_k 9 months ago

            A different continent is usually a few hundred milliseconds away. (The worst delay I had in my life was between me in Europe and the target machines in Japan, and the round-trip still was about a second.)

            The response time of an under-provisioned or poorly implemented API endpoint on a machine in the same rack can be multiple seconds though :(

          • lucb1e 9 months ago

            An edge case you can come up with != typical experience. What you're saying is not far-fetched, but also not the common case, especially at home, and it's not like this latency only occurs on RADII networks.

      • dataflow 9 months ago

        I feel like those don't explain it though. The most common delay people see is when they click Connect on a network without external authentication. It easily takes a few seconds, but surely that's not all due to DHCP etc.?

      • lucb1e 9 months ago

        That finally explains why this super cheap crappy device where you need to configure the BSSID and IP address statically is so insanely fast to connect to WPA2-PSK compared to every other system, no matter how beefy. Thanks! I didn't realize it wasn't inherent to WiFi but due to all the extras, at least once you know which router you're going for (and the router you were connected to before sleep mode is definitely a good bet, which it could actively query/probe for).

  • azernik 9 months ago

    2 is rather irrelevant to the intended use case: devices connecting to a network with pre-shared credentials. Think cash registers and crop humidity sensors connecting to the internet, not cafe wifi.

    1 is kind of unavoidable if you want a massively shared medium, mildly reliable connections once set up, and decent throughput.

    • jessriedel 9 months ago

      1 is not unavoidable. I’ve never gotten a clear answer on the explanation, but the most reasonable ones involve power-saving and frequency-saving strategies (only poll for new connections for a few hundred microseconds per second). And I think these are eminently solvable.

      On 2, my question isn’t “why haven’t they fixed #2 for this use case?” it’s “where does the effort to address this relatively small use case come from while #2 remains unaddressed?”

      • azernik 9 months ago

        A few hundred microseconds is not enough time to send an announcement at the low bitrates required for noisy channels, and then you need to multiply it by the number of cells in a collision domain. This is a hard limit of information theory and physics.
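
      For scale, a rough sketch (assuming a ~300-byte beacon, which varies by network, at the 1 Mbit/s 2.4 GHz base rate):

          beacon_bytes = 300            # typical-ish beacon size, varies with IEs
          base_rate_bps = 1e6           # 1 Mbit/s DSSS base rate
          airtime_us = beacon_bytes * 8 / base_rate_bps * 1e6
          print(f"{airtime_us:.0f} us per beacon")   # ~2400 us, well past a few hundred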

        And it is possible for the vast array of computer scientists in the world to work on more than one thing at a time; this particular work was done by a small team, whose specialties are in any case not suitable for work on better open authentication/consent protocols.

        • jessriedel 9 months ago

          My memory of the consensus, the last time this topic came up, is that the number is in fact less than a hundred microseconds. But it doesn't matter for my point if it's a few milliseconds or whatever. The problem is solvable.

  • Klasiaster 9 months ago

    A major problem is that one can only have a single connection. It would be nice to do WiFi Direct or ad-hoc mode while being connected to an AP. Being able to use WiFi Direct without losing the AP connection (or having a standardized ad-hoc channel for discovery) would replace uses of Bluetooth in consumer devices where Bluetooth is used for the initial connection to join a WiFi network - some devices rely on WiFi Direct but the UX is very bad because the AP connection gets lost.

  • sneak 9 months ago

    It (almost always) lacks forward secrecy or, indeed, any encryption at all when using unauthenticated networks.

    These are much bigger problems.

    The fact that they haven’t been fixed is so glaring that one can attribute it to enemy action, not carelessness.

  • tjoff 9 months ago

    1. Can't remember when this had any effect on my use. Unreliability wouldn't improve either if you cut the connection and reconnected as it would bring down all open connections. No thanks.

    2. Been years since I've seen one of those. The use-case is pretty much being abroad or in a really remote area with bad cell reception. And even in those cases it seems those captive portals are going out of fashion.

    All in all, so far down the list that I probably wouldn't think of them even if I tried.

    • habosa 9 months ago

      It’s been years since you’ve seen a captive portal? I can’t remember the last time I saw a WiFi network that didn’t have a password that also had no captive portal.

      I’m pretty sure every airport in the US has captive portal WiFi.

      • Marsymars 9 months ago

        I expect their point, which jibes with my experience, is that cell coverage is now good enough that there’s no need to connect to random wifi networks where you have to deal with their portals and contend with worse performance than your cell connection.

        • wongarsu 9 months ago

          In the US signal boosters seem to be common in public buildings. But in Europe it's common for big buildings (think convention centers, large multi-story shops, etc.) to offer free wifi but have bad cell signal on the inside.

          Additionally mobile data tends to be expensive, so lots of people in the lower half of the income spectrum will go with a smaller data plan and use free wifi at every opportunity. Which encourages places to offer it to get those people as customers.

          • jltsiren 9 months ago

            And in some countries, mobile data is cheap and available almost everywhere. Many people don't connect their phones to a WiFi in their daily life. Which may then become a huge security issue, as the phone may not download software updates on mobile data with default settings, under the false assumption that mobile data is expensive.

        • johnnyanmac 9 months ago

          I can't even get "5g" in my own house. I had to order a signal booster (fortunately at no extra cost)... that connects to my wifi. And the speeds make me wonder if we've really come that much further than 3G.

          The US mobile infrastructure is an absolute sham, even in the outskirts of a large city. I can only imagine how some entire states fare.

        • nine_k 9 months ago

          I have an unlimited 5G plan in the US, works like a charm, etc. But once I cross an ocean, the connectivity is suddenly very limited, and mobile data is expensive. So, until I buy a local SIM card, or maybe for a few hours of a layover in an airport, is where airport Wi-Fi may be really helpful.

          • Marsymars 9 months ago

            Take a look at eSIMs. They're crushingly convenient.

        • jayd16 9 months ago

          Doesn't the cell antenna use significantly more power than WiFi?

          • Marsymars 9 months ago

            In theory, yes, in practice, YMMV. I've never found myself in situations where I felt it was specifically worth connecting to Wi-Fi in order to conserve battery life.

      • olyjohn 9 months ago

        I haven't seen one either. But I haven't needed to connect to a public WiFi point in years. That might be the difference. Usually where there is public WiFi I can just tether on my own phone.

est 9 months ago

tl;dr existing Wi-Fi devices go long range with LoRa protocols.

The catch: additional power consumption.

Eric_WVGG 9 months ago

My god, who came up with this name?? “Wi-Lo” sounds like “LOW range”

  • malfist 9 months ago

    And if you google it, google assumes you're searching for willow