The bunkbed conjecture is false
187 points by surprisetalk 3 days ago | 69 comments
  • solidsnack9000 2 days ago |
    It is helpful that the post links to the Wikipedia page: https://en.wikipedia.org/wiki/Bunkbed_conjecture

    Reading that and then rereading the post, it all made a whole lot more sense: why the conjecture is intuitively appealing and why the computational approach doesn't readily result in a proof.

    • knodi123 2 days ago |
      I still don't get it. So you make this graph, with a random number on each edge. Then you duplicate that graph, stack them like a bunkbed, and connect every lower vertex to the one above it. Then you delete all edges with a number below some threshold. Wouldn't the upper and lower graphs still be identical? Since you had the same edge weights in both? So the route from any upper node X to any upper node Y should be 1 less than the route length between X and the point beneath Y.

      What did I miss?

      • cochne 2 days ago |
        There is no thresholding, you delete edges randomly based on the probability assigned.
        • knodi123 2 days ago |
          Thanks, that's an important distinction.
      • TheRealPomax 2 days ago |
        You missed the part where the conjecture is about subgraphs: after forming the "bunk bed", we start randomly deleting edges based on edge probability, so there may no longer be a direct path from x to y in what was originally the upper graph.
        • torstenvl 2 days ago |
          That seems irrelevant since then there would be no path in the lower graph either. This must be true because deletions are based on probability and the corresponding edges between upper graph and lower graph are required to have the same probability.
          • arjvik 2 days ago |
            Same probability, but still independently sampled.
          • calfuris 2 days ago |
            Having the same probability of deletion doesn't mean that they will necessarily share the same fate.
            • torstenvl 2 days ago |
              Ah. The Wikipedia description of the conjecture does not say that each edge is independently randomly deleted (or not) based on the probability assigned to that edge. I just assumed the probabilities were the weights in a weighted graph and had some other effect (e.g., likelihood of success traversing that edge, comparative likelihood of choosing that edge vs another when traversing the graph, etc.).
          • TheRealPomax 2 days ago |
            Each edge is independent from each other edge: you don't roll a die and then go "4: remove every edge that's 4 or less", you make a separate roll for each edge.
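            As a concrete sketch of that rule (names and structure are my own, not from the post): each edge gets its own independent roll against its own probability, rather than one global threshold applied to all edges.

```python
import random

def random_subgraph(edges, keep_prob):
    """Keep each edge independently with its own probability.

    `edges` is a list of (u, v) pairs; `keep_prob` maps each edge to the
    probability that it survives. One separate "roll" per edge.
    """
    return [e for e in edges if random.random() < keep_prob[e]]
```

            Crucially, the upper copy of an edge and its lower twin get the same probability but independent rolls, so the two bunks almost never come out identical.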
    • yread 2 days ago |
      I don't know. To me it sounds like it's obvious the conjecture doesn't hold: if you have a path in the upper bunk that gets broken you are screwed, but if that path is broken you have the option to switch to a path in the lower bunk at ANY moment. So you have an (n+1)/n higher chance of a break but n-1 ways to avoid it
      • supernewton 2 days ago |
        The conjecture is true for all small graphs that they tried, so if it's "obviously" false to you then something went wrong with your intuition somewhere.
        • zmgsabst a day ago |
          Or everyone else’s, since his intuition arrived at the right asymptotic answer.
      • tech_ken 2 days ago |
        I'm pretty sure you're sampling the "bedposts" as well, the upper and lower subgraphs aren't just connected at every node; I understood the rough (and incorrect) intuition to be like:

        P[nodes in upper bunk connected] > P[nodes in lower bunk connected] * P[sufficient "bedpost" edges survive connecting upper and lower]

        Since

        P[nodes in upper bunk connected] = P[nodes in lower bunk connected]

        And by definition

        P[sufficient "bedpost" edges survive connecting upper and lower] < 1

      • winwang 2 days ago |
        Intuitively (subjectively) why this argument doesn't work: if you switch to the path in the lower bunk, you can switch back to the upper bunk "at any moment" too, regardless of whether a path exists on the lower bunk.

        In this example, a1 is connected to both c1 and c2:

          a1    b1 -- c1
          |     |     |
          a2 -- b2 -- c2
        
        If, instead of deleting edges, we assigned a distance d = 1/p, then you'd expect (and probably easily prove) that the distance to the target vertex is at most the distance to its partner in the other bunk. The fact that this intuitive "problem translation" doesn't actually translate to probabilities is quite surprising (to me)!
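        For what it's worth, the pictured graph is small enough to check by brute-force simulation (a sketch of my own, not from the thread; here every edge, posts included, is kept independently with probability p, though formulations differ on whether the posts are random):

```python
import random
from collections import deque

# Edges of the pictured bunkbed; a1 reaches the rest only via the a1-a2 post.
EDGES = [("a1", "a2"), ("b1", "b2"), ("c1", "c2"),  # posts
         ("b1", "c1"), ("a2", "b2"), ("b2", "c2")]  # bunk edges

def connected(edges, s, t):
    """BFS: is t reachable from s using only the surviving edges?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False

def estimate(p, s, t, trials=20000):
    """Monte Carlo estimate of P[s and t still connected]."""
    hits = sum(
        connected([e for e in EDGES if random.random() < p], s, t)
        for _ in range(trials)
    )
    return hits / trials
```

        Comparing estimate(p, "a1", "c1") against estimate(p, "a1", "c2") on small graphs like this one always comes out in the conjectured direction; the published counterexample needs thousands of vertices.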
      • scottmsul 2 days ago |
        > if you have a path in the upper bunk that gets broken you are screwed

        Counter-factuals like this don't apply when talking about average probabilities. If you cross over, it's an identical graph with identical probabilities. idk, to me it seems really counter-intuitive that the opposite bunk's node would be easier to get to than the current bunk's node.

    • auggierose 2 days ago |
      Yeah, the introductory sentence in the post doesn't really do a good job explaining what the bunkbed conjecture is. If I was interested in it, I would have to look it up somewhere else.
  • keepamovin 2 days ago |
    Wow, funny how our intuitions fail on objects with 7523 vertices and 15654 edges
    • fbn79 2 days ago |
      and funny 7523 is a prime number.
      • f1shy 2 days ago |
        But 15654 not!
      • keepamovin 2 days ago |
        Nice! 15654 = 2*3*2609

        and (7253/15654)**(phi*pi)*100 ~ 2

        haha :)

    • y-curious 2 days ago |
      It was always obvious for me. But they did call me gifted in the third grade.
      • keepamovin 2 days ago |
        Hahaha it was obvious that it was false for you???
  • dataflow 2 days ago |
    > This is completely obvious, of course!

    Could someone explain why the conjecture seemed obviously true? I have no background with it, but just reading the description here, I was caught off guard by the sentence. What made it seem obvious?

    • keepamovin 2 days ago |
      Why is it "obviously true" (that the probability of connecting between levels is lower than of connecting within a level)?

      Because if it weren't true, that would imply that vertices on different levels (as reached, presumably, by a canonical DFS tree // spanning tree) were closer (in terms of paths) than vertices on the same level, which would mean that those vertices should have "more likely" been on the same level (however that is measured).

      TL;DR - 'obviously true' because there's a level between them, so (on average) of course!

    • jprete 2 days ago |
      The upper and lower bunk graphs are symmetrical in their probabilities, so it would intuitively seem like connecting to the same bunk's point would be easier/more likely than connecting to the other bunk's point. The first would require zero (or an even number of) crossings, the latter would require one (or an odd number of) crossings. Every crossing would seem to only decrease the odds of the path existing, or at best leave it the same.

      To me it smells like a trap, though. This feels like exactly the kind of problem where some algorithmically chosen infinite graph would break the conjecture, possibly through some kind of NP-hardness-proof-style transformation of the problem into another form, and who knows, maybe the Axiom of Choice gets involved too (but probably not based on the headline!).

      • sweezyjeezy 2 days ago |
        The (disproven) conjecture is for finite graphs.
      • joelignaatius 2 days ago |
        I may be missing something. If there's an edge u1 to v1 and, in the symmetrical graph, u2 to v2, then any post has to run between u1 and u2 for any u. Which means traversing a post essentially adds an edge to the path (no matter the probability). It seems intuitively obvious (which is stated here as incorrect) that going through a post would be more expensive.
        • elseweather 2 days ago |
          Yes, it seems intuitively obvious, which is why mathematicians spent a long time trying to prove that the conjecture was true. It turns out the conjecture is false in a non-obvious way. The result described in the blog post is a specific counterexample: the conjecture fails, just barely, for a specific graph with several thousand vertices and edges. It's not the kind of counterexample you could intuit in your head or even on a whiteboard.
          • joelignaatius 2 days ago |
            It would seem the next logical step would be to come up with a series of examples where the conjecture fails, extrapolate from there what new rules you can find, and then possibly attempt to draw an isomorphism from another field. At some point mathematics will turn into an LLM problem (I know, hype cycle). I'm interested in knowing whether there are branches of mathematics which are nearly inaccessible to non-computational methods of proof. And then there would be levels of mathematics where the proof itself would be true, but it would be much like me asking you for the intuition, except it would be man versus the computer. If you do this level of mathematics and put it in a box, you have some real-world result whose operations are incomprehensible but demonstrably analogous to something understandable. Schrodinger's AI.
    • ted537 2 days ago |
      I also know nothing but here's something missing from the blog post:

      "A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability."

      So the (apparently incorrect) intuition is that an (upper<->lower) connection starts with an extra edge in the connection, so an (upper<->lower) connection has a greater risk of disconnect via random edge removal. Therefore, a same-level connection is more likely to remain after the random edge removal.

    • cashew22 2 days ago |
      Here is my reasoning: start with two vertices u and v on the first floor and the vertex corresponding to v on the second floor (call it w). u and v are connected if, after deleting some edges randomly, there is still a path from u to v; similarly for u and w. But for any fixed path from u to w, you can get a shorter path from u to v by crossing between the floors one less time. Because a shorter path is more likely to survive than a longer one (by a factor of p), it makes sense that overall the probability of some path from u to v surviving is larger than that of some path from u to w. But that reasoning breaks down, in part because when you drop the last crossing edge there are many ways of adding it back in, so each path from u to v might correspond to many paths from u to w.
    • stonemetal12 2 days ago |
      Intuitively the longer a path is the more likely it is to be broken when you start randomly removing edges, and before you remove edges the shortest path from U1 to V2 is the shortest path from U1 to V1 plus a post. Therefore it seems like the path from U1 to V2 is more likely to be broken than the path from U1 to V1.
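      That single-path reasoning can be written down directly (illustrative only; the flaw is that the connection probability aggregates over many overlapping paths, not one fixed path):

```python
def path_survival(p, length):
    """Chance that one fixed path of `length` edges survives when each
    edge is independently kept with probability p."""
    return p ** length
```

      A U1-to-V2 route that is one post longer than the U1-to-V1 route survives with probability p**(k+1) < p**k, but summing over all routes can, as the counterexample shows, tip the other way.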
  • geminger 2 days ago |
    So it's been debunked.
  • andrewflnr 2 days ago |
    While I wouldn't accept a probabilistic "proof" of a theorem like this, it does seem clear to me that those results are important for directing the focus of research, especially in cases where they go against intuition. Given that most of the math community was barking up the wrong tree, even if these guys only had the probabilistic result, surely that would eventually help someone find the right proof? That's at least worth publishing as a letter or something, right?

    Ed: reflecting on my first sentence, I guess I tend to think of proofs as being at least as important a product of math as the raw truth of a statement. A fact is a fact, but a proof represents (some level of) understanding, and that's the good stuff. Experiments are potentially good science, but usually not math.

    • bubblyworld 2 days ago |
      Can you elaborate? It's possible to prove probabilistic results about discrete objects rigorously - in fact this is a reasonably common technique (going back to work by Paul Erdos, and probably further). It can give you concrete information about the existence or non-existence of these objects (if something exists with non-zero probability in a finite set there has to be at least one example). It's not a "less valid" way to prove things, just a particular choice of approach. Sometimes it makes things easier, sometimes it doesn't.
      • ilya_m 2 days ago |
        We are talking about different kinds of "probabilistic" proofs. Igor Pak's blog post and the root message in this thread refer to statements that are validated only probabilistically. The probabilistic method, by Erdos and others, is a proof as rigorous as any. "Probabilities" in this method are just a convenient apparatus for counting objects.

        (As an aside, there's an important distinction with statements that are believed to be true based on empirical evidence, such as hardness of factoring or the Riemann hypothesis. If the bunkbed conjecture were refuted based on Monte Carlo simulations, it'd be possible to reduce the level of uncertainty arbitrarily low by throwing more resources at running more simulations.)

        • bubblyworld 2 days ago |
          Right - I admit I stopped reading the blog post at the point they linked to the paper and started reading that instead (which contains a rigorous proof). Having read the rest of the blog now I can see where the OP is coming from.
      • samatman 2 days ago |
        What you're describing is different from the distinction the parent is making, which is between providing a high probability of evidence that a conjecture is true, versus proving the conjecture true through a rigorous formal proof.

        Often there are three useful states: none, some, and all. You describe a rigorous proof-of-some, because a proven probability of not-zero guarantees a counterexample exists. It's still news when or if the counterexample is found, but given the standard of rigorous proof is met, it does exist. Of course another option is to construct a counterexample, as was done here.

        The case of probability discussed in the Fine Article was rigorously demonstrating a 99.99% chance that the probability is not zero. That's informative, but it isn't proof. A rigorous proof that the probability is not zero, absent the counterexample which we now know (rather than expect on good grounds) exists, would also be a proof. But that isn't the case being discussed here.

        • bubblyworld 2 days ago |
          Yeah, I missed that there was a discussion of empirical evidence in mathematics in the blog post (rather than the probabilistic method).
      • feoren 2 days ago |
        > if something exists with non-zero probability in a finite set there has to be at least one example

        I think you mean an infinite set here. If I have a set containing one single coin, and I flip every coin in my set, both heads and tails have a non-zero probability to exist within my flipped set. But there's certainly not an example of both heads and tails in my single-coin set. Indeed for any finite set of coins, there's some chance that either heads or tails never shows up.

        However, if I have an infinite set of coins and I flip them all, certainly both heads and tails must exist (although I wouldn't be surprised if even this depends on the axiom of choice or something.)
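        The finite case is easy to simulate (my own sketch; `flip_coins` is a hypothetical helper, not anything from the thread):

```python
import random

def flip_coins(n, seed):
    """Flip a finite set of n fair coins, deterministically per seed."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(n)]

# Heads occurs with non-zero probability (7/8) in a 3-coin set, yet some
# sampled sets contain no heads at all:
no_heads = [s for s in range(1000) if not any(flip_coins(3, s))]
```

        Each 3-coin sample misses heads with probability 1/8, so a sizable fraction of the seeds produce a set with no heads, despite heads "existing with non-zero probability".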

        • zamadatix 2 days ago |
          I think GP did mean finite, just didn't state how perfectly clearly (and also missed what the article was focusing on a bit). I.e. if there is some finite set and a non-zero probability of something existing in it, then there must be at least one _possible_ example of how this can occur - otherwise the probability could not be non-zero. Not that a single given instance of the finite set must show that result, just that one possibility of what it could contain must.

          What you're saying about the infinite-set version of GP's statement depends on what flavor of statistics you're measuring by. E.g. a frequentist approach would give the probability of something that occurred 50 times out of infinity as 0, while other approaches may say a small non-zero probability.

          Flipping an infinite set of coins is actually an inverse view of what's described above, and not necessarily "certain" even though the probability is 1 ("almost surely" rather than "certainly" is the wording), for the same reason that a non-empty set doesn't necessarily nudge the probabilities of an infinite sequence. The Wikipedia article explains this better than I could: https://en.wikipedia.org/wiki/Almost_surely

          • bubblyworld 2 days ago |
            Yes, I did indeed mean finite. I was referring to the probabilistic method in combinatorics (https://en.wikipedia.org/wiki/Probabilistic_method). It's a non-constructive technique for proving existence theorems. Of course, this kind of thing works for infinite sets too, but there are some subtleties when it comes to sets of measure zero that kind of obscure the point.
            • feoren 2 days ago |
              > Yes, I did indeed mean finite.

              Then you're wrong. I gave you a counterexample to your statement, which you have ignored. If you care about being correct on this, you should ruminate on my comments, because what you are currently saying is wrong.

              > Probabilistic method

              Neither example on that Wikipedia page shows proving the existence of something from a finite set because its probability is non-zero.

              The first example is actually showing that if the probability is zero from a finite set then there can be no examples. It's an example of #4 below. The critical point is "The number of monochromatic r-subgraphs... is a non-negative integer, hence it must be 0 (if strictly less than 1)." It's the inverse of your statement (and the contrapositive of mine, hence it has the same truth value): "if the probability of something existing in a finite set is 0, there must be no examples". That's very different!

              The second example is from an infinite set. It's an example of #1 below. The key sentence is: "For sufficiently large n, the probability that a graph has both properties is positive". We're given g and k as parameters to the problem, and we show that there exists some n above which the probability of a random graph having the property we want is greater than 0. It is NOT proven that the graph we're looking for has exactly n vertices! It's only proven that random graphs of at least n nodes have a positive probability to have the property. The reason the proof works is that there are infinitely many numbers larger than n! It shows that there are infinitely many graphs that have non-zero probability to have the property we're looking for, and therefore at least one must actually exist.

              Here's a list. Your incorrect claim is counter to #3.

              1. Infinite set, nonzero probability: we know at least one example must exist.

              2. Infinite set, zero probability: not sure! It could be zero, or any subset of smaller cardinality or measure zero.

              3. Finite set, nonzero probability: not sure! It could be zero! My coin flip example shows this.

              4. Finite set, zero probability: we know no examples can exist.

              • bubblyworld a day ago |
                Look, I'm not going to argue about this because we're talking about completely different things. Read up on the probabilistic method if you're genuinely interested in what I wrote - Paul Erdos has some famous papers on it.
                • feoren a day ago |
                  We're not talking about different things. We're talking about one very specific claim that you made:

                  > if something exists with non-zero probability in a finite set there has to be at least one example

                  This is a false claim. See my #1 through #4 above. I strongly suspect if you carefully re-read those Erdos papers, you'll discover that he never relies on my #2 or #3 above to prove anything; only #1 and #4.

                  • bubblyworld a day ago |
                    If a property P holds over a sample space with positive probability (i.e. the measure of the subset of elements satisfying P is positive), this implies that there is at least one element of the sample space satisfying P. This is easy to prove - it follows formally from sigma-additivity of the measure.

                    I think you are confusing this with the notion of taking repeated samples from a probability distribution (which is what your coin example is all about).
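                      In symbols, the sigma-additivity step being invoked here is just the following one-liner (standard measure-theory notation, my rendering, where mu is the probability measure on the sample space Omega):

```latex
\mu\bigl(\{x \in \Omega : P(x)\}\bigr) > 0
\;\Longrightarrow\;
\{x \in \Omega : P(x)\} \neq \emptyset,
\qquad \text{since } \mu(\emptyset) = 0.
```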

                    • feoren a day ago |
                      > i.e. the measure of the subset of elements satisfying P is positive

                      Question: do any finite sets ever have positive (nonzero) measure? If not, then your sample space is infinite and you're in #4. Haven't you just restricted our conversation only to infinite sets via this assumption? Remember that this entire discussion started because of your claim about finite sets!

                      • bubblyworld a day ago |
                        That depends on the measure, obviously. It's trivial to construct examples in which finite sets have positive measure - just take the uniform probability distribution over a finite set. Or for a more thematic example, the probability space of random graphs on n vertices. Or really any finite measure you like!

                        There are also myriad examples of measures over infinite spaces in which there exist finite sets with positive measure - left as an exercise to the reader.

                I don't mean this to be rude, but I think you would benefit from reading an undergraduate-level textbook on probability theory (or measure theory). Especially before making confident claims about how people like Erdos are wielding it.

                        • feoren 11 hours ago |
                          I really, really think you are misunderstanding what makes Erdős's proof work in the second example in your Wikipedia link. It really has nothing to do with measure, and relies entirely on showing that there is an infinite set of graphs that all have probability at least 1/2 to have two properties. The proof simply would not work if all of those conditions were not there (I think we're both confident that Erdős did not include superfluous conditions in his proofs!) "Measure" is not mentioned at all.

                          I obviously don't know the intricate details of every possible measure that could be defined, but I strongly suspect that no matter what measure you pick, the following statement is utterly tautological: "If the set { x ∈ S | P(x) } has measure > 0, then there must exist some y in S s.t. P(y)". You're really just saying: if we can show that the subset of elements in S with property P contains at least one element, then S must also contain that element. Of course! It's a subset of S! It's tautological, and I don't believe Erdős threw in tautological steps in his proofs. You are misunderstanding what the proof step is.

                          We're talking about the Second example in this link: https://en.wikipedia.org/wiki/Probabilistic_method

                          It's structured like this: "Let n be very large and consider a random graph G on n vertices ... we show that with positive probability, G satisfies two properties." Let me ask: how are we mathematically certain there exists a graph G that satisfies the two properties? (And we are; the proof is correct.) Which of the 4 situations are we in? Your focus on finite sets makes it sound like we're in #3: finite set, nonzero probability. And indeed the set of random graphs with n vertices is finite! But that is an important mistake. We are not relying on argument #3. Yes, the set of random graphs with n vertices is finite, but we have NOT proven that any specific G has our two properties, and we have NOT proven that any graph with n vertices has those two properties! It's entirely possible that NO graphs with n vertices have the two properties; all we've shown is that every graph in a finite set has some positive probability to have our properties.

                          You might encounter a similar proof, exactly the same up to this point, and that proof might be wrong! So why does Erdős's proof work? It relies on two things that are both necessary: (1) we're not talking about a finite set at all! We're talking about random subgraphs of size n OR LARGER. We're actually picking from an infinite set here. Our smallest example might be of size 10^10^10^n! We don't know! We just know that if we keep going, eventually we must hit one. And, critically, (2) there is a positive lower bound on the probability, above some n (it's 1/2). That is, if the probability of a random subgraph having those two properties was some small nonzero probability that decreased as n increased, e.g. p ~ 1/n^2, then we still might never find an example! The proof could be incorrect! But it's shown that above some n, the probability is at least 1/2. Thus the whole proof works, because 1 - (1/2)^inf = 1. The sum of existence probabilities over all possible examples (infinite) is 1. QED.

                          So in summary, this statement:

                          > If a property P holds over a sample space with positive probability (i.e. the measure of the subset of elements satisfying P is positive), this implies that there is at least one element of the sample space satisfying P.

                          is either false or tautological.

                          If "positive probability" means proving that some elements in a finite sample space have a positive probability of satisfying P, you have not proven existence. You need to show that the sum of "existence probabilities" of all your examples is 1. Showing a positive probability on an infinite set does this (unless the sum of probabilities over all infinite examples somehow converges to a number < 1, e.g. if p ~ 1/n^2 above).

                          If "positive measure" means "we've proven such a thing exists", then it's tautological. All you've said is "if you can prove something exists in a subset of S, then it also exists in S." I guarantee you Erdős did not include such unnecessary tautologies in his proofs.

          • feoren 2 days ago |
            You're simply wrong. What I said was correct.

            > if there is some finite set and there is a non-zero probability of something existing in it then there must be at least one _possible_ example of how this can occur

            That is not what GP claimed nor does it really make sense. Yes, if there is a non-zero probability of something existing, then it must be possible. That is what non-zero probability means. That has nothing to do with sets. You've just added the word "possible" to make the claim vague enough to be not provably incorrect.

            If you have a finite set, and you show that members of that set have a non-zero (but less than 1) probability of having some property, you do not know whether a member of the set actually has that property! Period! Why is this a point of contention?

            > a frequentist approach would give the probability of something that occured 50 times out of \inf as 0 while other approaches may say a small non-0 probability.

            No. Nobody would say that 50/infinity is greater than 0.

            You haven't actually responded to the thing I actually said, nor really to GP's claim. The claim was:

            >> if something exists with non-zero probability in a finite set there has to be at least one example

            That is false. I gave an example of how it's false.

            • moralestapia a day ago |
              Agree, @zamadatix seems a bit out of his area of expertise.

              On finite sets, the stuff they're saying doesn't completely make sense.

              • bubblyworld a day ago |
                I'm an ex-PhD in mathematics. We're talking about two entirely separate things - finite numbers of samples from a probability distribution (what you are talking about) vs the measure of a subset of some finite number of objects (the probabilistic method).
              • zamadatix a day ago |
                Just trying to prevent my "false assumption" confirmation bias from growing - did you come to this thread as a result of https://news.ycombinator.com/item?id=41729586 or should I be attributing happenstance? Either is cool, no worries!
            • zamadatix a day ago |
              > That is not what GP claimed nor does it really make sense. Yes, if there is a non-zero probability of something existing, then it must be possible. That is what non-zero probability means. That has nothing to do with sets. You've just added the word "possible" to make the claim vague enough to be not provably incorrect.

              I'm not here to assert what GP meant with certainty (and neither should you?). Nor am I going to bother engaging further when you so blatantly end with how your troubles with the vagueness in your understanding of what the claim meant lie in being unable to call it incorrect.

              > No. Nobody would say that 50/infinity is greater than 0.

              Nor did I claim that's how the other approaches come to calculate the probability greater than 0... again, what's the point in me engaging on this if your goal is to slap portions of text together until you find something to disagree with instead of actually engaging with me? A complaint about me not including how the alternative approaches come to a different conclusion would be more than a valid criticism (or even a friendly probe, were you actually interested), but your goal to make each response to anyone here start with "no", "incorrect", or "you're wrong" is apparently too driving to allow that kind of discussion.

              > You haven't actually responded to the thing I actually said, nor really to GP's claim. The claim was... That is false. I gave an example of how it's false.

              This, ironically, has got to be the most perfect way I've seen to skip actually responding to anything someone else has said and return to one's original assertions in one swoop. Especially combined with the response opening "You're simply wrong. What I said was correct"...

              All that aside, it'd be a disservice to myself to say I'm not going to engage as a "last word" situation so I promise to come back to read any response (if one is made) and see what takeaways you have for me. My only ask is it be more than just how you're right/GP is so wrong/I'm wrong - at least something else about how my initial interjection/the GP's original statement (as you understand it from the other side thread now) could have been better, a more rigorous definition (i.e. not just words we've all been saying) of what you've been arguing so I can say "yeah, I did really understand what he was trying to say was so" (even if I disagree that was the ultimate point being made), or the like. Or don't, it's ultimately a service for me - you won't necessarily get anything out of it. Thanks

              • feoren 11 hours ago |
                Look, all I can do is respond to the words as actually written. Someone made a claim and I countered it. Now you appear to be simultaneously complaining that I'm responding to exact quoted text (which I "slapped together" by directly quoting?) and also that I'm somehow also not responding to you because after my direct response to the words you wrote, I tried to re-focus the discussion on the original claim? I didn't skip actually responding to you, I did respond to you (50/infinity = 0); you just didn't like my response.

                My takeaway for you is this: we're talking about mathematical proof techniques, and what you can conclude from the four different situations below. The soundness of a mathematical proof is independent of whether you're a frequentist or not, which is why I brushed off that comment. The particular measure you choose doesn't really matter either. It might matter whether you accept the axiom of choice or not, but I'm the only one who's brought that up so far. It's important because mathematical proofs are very finicky on exact details, and bubblyworld's statement is exactly the kind of pernicious little detail that can trick you into thinking a proof is correct when it's not. That matters!

                The main free variable here is exactly what "probability" means in terms of how you're "picking" possible objects. Yes, there are some cases where showing that an object exists with "positive probability" (and almost certainly a subset having "positive measure") means that one example must actually exist, but if that's the case, the jump from "positive measure" to "must exist" is actually tautological! That is, your proof doesn't need that step; the act of showing "positive measure" has already proven the existence. The very fact that a mathematical pillar like Erdős was relying on such a step means that it was necessary for his proof. His proofs didn't have random tautological steps in them. We are not talking about the proof technique of "if a subset of X has a thing in it, then X must have a thing in it" -- that's literally tautological.

                So what does "probability" really mean here? bubblyworld gave a link to examples of what he's talking about (see [1], Example 2), and seemed to misunderstand the proof. It's structured like this: "Let n be very large and consider a random graph G on n vertices ... we show that with positive probability, G satisfies two properties." Let me ask: how are we mathematically certain there exists a graph G that satisfies the two properties? (And we are; the proof is correct.) Which of the 4 situations are we in below? bubblyworld's insistence on finite sets makes it sound like we're in #3: finite set, nonzero probability. And indeed the set of random graphs with n vertices is finite! But that is an important mistake. We are not relying on argument #3 below. Yes, the set of random graphs with n vertices is finite, but we have NOT proven that any specific G has our two properties, and we have NOT proven that any graph with n vertices has those two properties! It's entirely possible that NO graphs with n vertices have the two properties; all we've shown is that every graph in a finite set has some positive probability to have our properties.

                You might encounter a similar proof, exactly the same up to this point, and that proof might be wrong! If you believe bubblyworld, you could make this mistake. So why does Erdős's proof work? It relies on two things that are both necessary: (1) we're not talking about a finite set at all! We're talking about random subgraphs of size n OR LARGER. We're actually picking from an infinite set here. Our smallest example might be of size 10^10^10^n! We don't know! We just know that if we keep going, eventually we must hit one. And, critically, (2) there is a positive lower bound on the probability, above some n (it's 1/2). That is, if the probability of a random subgraph having those two properties was some small nonzero probability that decreased as n increased, e.g. p ~ 1/n^2, then we still might never find an example! The proof could be incorrect! But it's shown that above some n, the probability is at least 1/2. Thus the whole proof works.

                Any discussion about frequentism or Lebesgue measure is missing the point: we're talking about whether proofs are correct or not, and that is a very finicky, tricky discussion. bubblyworld's original statement, and his continued refinements of that statement, are incorrect, and believing such things can lead you to accept incorrect proofs. I think that matters. So I corrected it.

                1. Infinite set, nonzero probability: we know at least one example must exist.

                2. Infinite set, zero probability: not sure! It could be zero, or any subset of smaller cardinality or measure zero.

                3. Finite set, nonzero probability: not sure! It could be zero! My coin flip example shows this.

                4. Finite set, zero probability: we know no examples can exist.

                [1] https://en.wikipedia.org/wiki/Probabilistic_method
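                Point (2) above, the need for a positive lower bound on the probability, can be sketched numerically with a toy model (my own simplification: treat each size n as an independent trial, which real proofs don't literally do). The chance of never finding an example is then the product of the per-trial failure probabilities:

```python
def never_prob(ps):
    """Probability that every one of a sequence of independent trials
    fails, when trial k succeeds with probability ps[k]."""
    prod = 1.0
    for p in ps:
        prod *= 1.0 - p
    return prod

# Success probability bounded below by 1/2: the failure probability
# collapses to zero, so an example must eventually turn up.
bounded = never_prob([0.5] * 60)  # about 2**-60

# Success probability decaying like 1/n^2: the infinite product
# converges to 1/2, so even unboundedly many trials leave a 50% chance
# of never finding an example at all.
decaying = never_prob([1.0 / n**2 for n in range(2, 100000)])  # ~0.5
```

With the decaying probabilities, "positive probability at every size" really does fail to guarantee existence in this toy setup, which is exactly why the 1/2 lower bound is doing work in the proof.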

                • zamadatix 8 hours ago |
                  I know I said I wouldn't say anything but I really wanted to thank you for taking the time for an incredibly well laid out response and discussion. This was indeed helpful on a few fronts for me. I've still gotta percolate a few thoughts before I'd say anything for certain but this did clarify quite a bit of the discussion already. Like you say - words are a very fuzzy way to discuss a specific point of math :). Thanks again.

                  Apologies if my initial wording or response made it difficult to reach this kind of discussion and much appreciation for setting that aside when you wrote this for me.

  • IshKebab 2 days ago |
    This post could really do with a couple of diagrams of what these graphs are. I think I got it from the description but I'm not sure...
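    For anyone who'd rather read code than a diagram, here is a toy Monte Carlo sketch of the setup (my own illustration, with a uniform retention probability p for every edge, posts included - the paper's counterexample uses more refined per-edge probabilities):

```python
import random

def bunkbed_probs(edges, n, u, v, p=0.5, trials=40000, seed=0):
    """Estimate P(u <-> v) and P(u <-> v') in the bunkbed graph of a
    base graph on vertices 0..n-1: two copies of `edges` (lower bunk i,
    upper bunk i+n) plus a vertical post (i, i+n) at every vertex, each
    edge kept independently with probability p."""
    rng = random.Random(seed)
    same = other = 0
    for _ in range(trials):
        adj = {i: [] for i in range(2 * n)}

        def keep(a, b):
            if rng.random() < p:
                adj[a].append(b)
                adj[b].append(a)

        for a, b in edges:
            keep(a, b)          # lower copy
            keep(a + n, b + n)  # upper copy
        for i in range(n):
            keep(i, i + n)      # vertical post
        # DFS from u over the surviving edges
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        same += v in seen         # reached v in the same bunk
        other += (v + n) in seen  # reached v' in the other bunk
    return same / trials, other / trials

# Triangle base graph; the conjecture predicts P(0 <-> 1) >= P(0 <-> 1'),
# which does hold for graphs this small.
p_same, p_other = bunkbed_probs([(0, 1), (1, 2), (0, 2)], 3, 0, 1)
```

The surprise of the result is that for one carefully constructed (and much larger) graph, with the right edge probabilities, the inequality this sketch exhibits tips the other way.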
  • willcipriano 2 days ago |
    It does not in fact give you extra space in your room to do activities.
  • tsimionescu 2 days ago |
    This is an interesting case of a conjecture holding true for small objects but breaking down for huge ones, without that size appearing anywhere in the statement itself.

    Sometimes we tend to have this intuition that if a rule applies to all low numbers, then it must apply to all numbers - that there can't be some huge number after which it breaks down (unless, of course, it explicitly includes that huge number, as in the rule "all numbers are smaller than a billion billion billion").

    This is such a powerful intuition, even though it's obviously wrong, that rules breaking it are even seen as a problem. For example there is the so-called "hierarchy problem" in physics, which states something like "there is no reason why the weak force is so much stronger than gravity". As if 2 times as strong were somehow fundamentally different from 10^24 times as strong. And this has ended up being called a major problem with the Standard Model, even though mathematically it's completely unremarkable.

    • throwawayiionqz 2 days ago |
      Many examples of conjectures true for all n, until a very large one is eventually found : https://mathoverflow.net/questions/15444/examples-of-eventua...
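      One classic entry of that kind is Euler's sum of powers conjecture (1769), which stood for almost two centuries until Lander and Parkin's 1966 computer search turned up a counterexample with fifth powers - small enough to check in one line:

```python
# Lander & Parkin's counterexample to Euler's sum of powers conjecture:
# four fifth powers summing to a fifth power.
assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
print(144**5)  # 61917364224
```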
    • snarkconjecture 2 days ago |
      There are two kinds of naturalness principle in physics, sometimes called "technical naturalness" and "Dirac naturalness" respectively.

      Dirac naturalness is as you describe: skepticism towards extremely large numbers, end of story. It has the flaw you (and every other person who's ever heard it) point out.

      Technical (or 't Hooft) naturalness is different, and specific to quantum field theory.

      To cut a long story short, the "effective", observable parameters of the Standard Model, such as the mass of the electron, are really the sums of enormous numbers of contributions from different processes happening in quantum superposition. (Keywords: Feynman diagram, renormalization, effective field theory.) The underlying, "bare" parameters each end up affecting many different observables. You can think of this as a big machine with N knobs and N dials, but where each dial is sensitive to each knob in a complicated way.

      Technical naturalness states: the sum of the contributions to e.g. the Higgs boson mass should not be many orders of magnitude smaller than each individual contribution, without good reason.

      The Higgs mass that we observe is not technically natural. As far as we can tell, thousands of different effects due to unrelated processes are all cancelling out to dozens of decimal places, for no reason anyone can discern. There's a dial at 0.000000.., and turning any knob by a tiny bit would put it at 3 or -2 or something.

      There are still critiques to be made here. Maybe the "underlying" parameters aren't really the only fundamental ones, and somehow the effective ones are also fundamental? Maybe there's some reason things cancel out, which we just haven't done the right math to discover? Maybe there's new physics beyond the SM (as we know there eventually has to be)?

      But overall it's a situation that, imo, demands an explanation beyond "eh, sometimes numbers are big". If you want to say that physical calculations "explain" anything -- if, for example, you think electromagnetism and thermodynamics can "explain" the approximate light output of a 100W bulb -- then you should care about this.
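      The knob-and-dial picture can be caricatured numerically (all numbers here are invented purely for scale; this is not a real Standard Model calculation):

```python
# Toy illustration of an unnatural cancellation: an "effective" value
# that is a tiny residue of huge contributions that almost cancel.
contributions = [4.2e16, -1.7e16, -2.5e16, 1.1e10, -1.1e10 + 125.0]
effective = sum(contributions)  # ~125, many digits below each term

# Nudge one underlying "knob" by one part in a trillion and the dial
# swings by orders of magnitude relative to the residue:
contributions[0] *= 1 + 1e-12
perturbed = sum(contributions)  # ~42125
```

That hypersensitivity of the residue to each underlying term is what "technically unnatural" is pointing at, not merely the size of any one number.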

    • ants_everywhere 2 days ago |
      One way to think of this is that smaller numbers are more constrained in a sense. As numbers get bigger more possibilities arise.

      For example, in lower dimensions you get accidental isomorphisms [0]. These exist essentially because the defining equations are more constrained in lower dimensions.

      A more tongue-in-cheek intuition is the "interesting number paradox", which says essentially that every integer is interesting. One of the things that can make a number interesting is being the first counterexample to some property.

      [0] https://en.wikipedia.org/wiki/Exceptional_isomorphism

  • dudeinjapan 2 days ago |
    I already learned this from Step Brothers: https://www.youtube.com/watch?v=ulwUkaKjgY0
  • ykonstant 2 days ago |
    That does seem extremely counter-intuitive; can the counterexample be adapted to provide a sequence of counterexamples with unbounded number of edges?
    • r0uv3n 2 days ago |
      I guess you can just add some extra vertices and edges and give all the extra edges probability 1 of being deleted.
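      That padding trick is easy to sanity-check on a plain percolation graph (my own toy sketch, not the paper's construction): a vertex whose only edge is deleted with probability 1 leaves all connection probabilities unchanged.

```python
import random

def conn_prob(n, edges_keep, u, v, trials=30000, seed=1):
    """Monte Carlo estimate of P(u <-> v) when edge e is kept
    independently with probability edges_keep[e]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        adj = {i: [] for i in range(n)}
        for (a, b), q in edges_keep.items():
            if rng.random() < q:
                adj[a].append(b)
                adj[b].append(a)
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        hits += v in seen
    return hits / trials

# A path 0-1-2 with p = 1/2 per edge: P(0 <-> 2) = 1/4 ...
base = {(0, 1): 0.5, (1, 2): 0.5}
p0 = conn_prob(3, base, 0, 2)
# ... padded with a vertex 3 whose edge is always deleted: same answer.
padded = {**base, (2, 3): 0.0}
p1 = conn_prob(4, padded, 0, 2)
```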
  • User3456335 2 days ago |
    The paper seems to have a definition where bed posts are never deleted, i.e. they are all assigned probability 1, in which case the conjecture is obviously true.

    The counterexample seems to rely on correlations between edge deletions which makes no sense because the deletions should be independent (in the definition I'm seeing on Wikipedia).

    I could be wrong here because I haven't read it in detail, but at first sight it looks like there are some serious issues with mathematical rigour here.

    • JohnKemeny 2 days ago |
      You can probably replace any transversal vertex with a large clique (depending only on p) such that the probability that the clique remains connected and at least one post remains.

      (I.e., you can "model" undeletability by adding sufficiently many twin posts.)

      Ps., do you mean "in which case the conjecture is obviously false"?

      • User3456335 2 days ago |
        Surely, if the posts can't be deleted, then which bunk a node is chosen from will not affect the connectedness probability, which would make it impossible to find a counterexample, right? What am I missing here?

        PS Depends on what conjecture we refer to here.