Intuitively nice in a sense, but I honestly think '0' is misrepresenting what is going on here. I'm ok with it being "+ and/or - infinity" as a new definition.
Programmatically I think it should result in a NULL or VOID or similar. I mean, by definition it has no definition.
KARIM ANI: If you take 10 and divide it by 10, you get one. 10 divided by five is two. 10 divided by half is 20. The smaller the number on the bottom, the number that you're dividing by, the larger the result. And so by that reasoning ...
LULU: If you divide by zero, the smallest nothingness number we can conceive of, then your answer ...
KARIM ANI: Would be infinity.
LULU: Why isn't it infinity? Infinity feels like a great answer.
KARIM ANI: Because infinity in mathematics isn't actually a number, it's a direction. It's a direction that we can move towards, but it isn't a destination that we can get to. And the reason is because if you allow for infinity then you get really weird results. For instance, infinity plus zero is ...
LATIF: Infinity.
KARIM ANI: Infinity plus two is infinity. Infinity plus three is infinity. And what that would suggest is zero is equal to one, is equal to two, is equal to three, is equal to four ...
STEVE STROGATZ: And that would break math as we know it. Because then, as your friend says, all numbers would become the same number.
[0] https://radiolab.org/podcast/zeroworld

> 1 / 0
Infinity
> 1 / -0
-Infinity
> 0 === -0
true
> Object.is(0, -0)
false
In a language like C or Rust, you can cast your +0.0 and -0.0 to an integer, and print out the bit pattern. They are different.
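If you'd rather check without leaving Python, here's a minimal sketch using the struct module to expose the IEEE 754 bit patterns (Rust's f64::to_bits or a C memcpy would show the same thing):

import struct

def bits(x: float) -> str:
    # Reinterpret the 8 bytes of a double as a 64-bit unsigned integer.
    return format(struct.unpack("<Q", struct.pack("<d", x))[0], "064b")

print(bits(0.0))   # all 64 bits zero
print(bits(-0.0))  # sign bit set, all other bits zero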
>>> np.float64(-1.)/0.
-inf
>>> np.float64(1.)/0.
inf
And you're exactly right, 0/0 is NaN in 754 math exactly because it approaches negative infinity, zero (from 0/x), and positive infinity at the same time.

`1/0` and `1/0 + 1` aren't meaningfully different, so it kinda does make sense for whatever notation to not make a distinction.
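The 0/0 claim is easy to verify in the same numpy style as the session above (a quick sketch; the errstate guard just silences the warnings):

import numpy as np

with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / 0.0)   # inf
    print(np.float64(-1.0) / 0.0)  # -inf
    print(np.float64(0.0) / 0.0)   # nan: no single limit to pick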
[ok] 1. Infinity + 1 == Infinity + 2
[ok] 2. Infinity + 1 - Infinity == Infinity + 2 - Infinity
[wrong] 3a. 1 == 2 (assumes Infinity - Infinity == 0, which is false)
[ok] 3b. Infinity == Infinity
So starting from Infinity + 1 == Infinity + 2 gets you nowhere interesting.
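For what it's worth, IEEE 754 floats, which do adopt an infinity, behave exactly this way (a quick Python check):

inf = float("inf")
print(inf + 1 == inf + 2)                  # True: step 1 holds
print(inf - inf)                           # nan: step 3a's "Infinity - Infinity == 0" fails
print((inf + 1) - inf == (inf + 2) - inf)  # False: nan never equals nan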
And that quote is a great example of what I hate about every pop-sci treatment of mathematics:
> Because infinity in mathematics isn't actually a number, it's a direction
Any time someone says "actually, in mathematics, ..." they're talking out of their ass. No matter what comes after, there is a different system of math that makes their statement false. There are plenty of branches of mathematics that are perfectly happy with infinity being a "number", not a "direction". What even is a "number" anyway?
Infinity is not a real number.
> > If 0/0 = 0 then lim_(x -> 0) sin(x) / x = sin(0) / 0 = 0, but by L'Hospital's Rule lim_(x -> 0) sin(x) / x = lim_(x -> 0) cos(x) / 1 = 1. So we have 0 = 1.
> This was a really clever one. The issue is that the counterargument assumes that if the limit exists and f(0) is defined, then lim_(x -> 0) f(x) = f(0). This isn’t always true: take a continuous function and add a point discontinuity. The limit of sin(x) / x is not sin(0) / 0, because sin(x) / x is discontinuous at 0. For the unextended division it’s because sin(0) / 0 is undefined, while for our extended division it’s a point discontinuity. Funnily enough if we instead picked x/0 = 1 then sin(x) / x would be continuous everywhere.
Similar examples can be constructed for any regular function which is discontinuous (e.g. Heaviside step function).
This explains Lean's behavior. Basically, you use a goofy alternate definition of division (and sqrt, and more), and to compensate you have to assume (or prove based on assumptions) that the things you will divide by are never zero.
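The convention is visible even in core Lean 4 (a minimal sketch; `div_zero` is the relevant Mathlib lemma name):

-- Nat division in core Lean 4 is total, with n / 0 defined to be 0.
#eval (1 : Nat) / 0    -- 0
#eval (42 : Nat) / 0   -- 0
-- Mathlib states the same convention for fields (lemma `div_zero`:
-- a / 0 = 0), which is why proofs carry `b ≠ 0` hypotheses wherever
-- "real" division is intended.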
Hillel's pedantry is ill-taken, though, because he starts off with a false accusation that the headline tweet was insulting anyone.
Also, 1/0=0" is sound only if you change the field axiom.of division, which is fine, but quite rather hiding the ball. If you add " 1/0=0" as an axiom to the usual field axioms, you do get an unsound system.
Just because it’s formally consistent doesn’t mean it isn’t dumb.
"Dumb" is purely a matter of aesthetic preference. Calling things "dumb" is dumb.
> Normally, when you divide by a small number, you get a large number. Now for some reason it goes through zero.
Zero is not a "small" number. Zero is the zero number. There is no number that is a better result than 0 when dividing by 0; "Infinity" is not a real (or complex) number. This itself is a GREAT reason to set 1/0 = 0. It only ever bothers people who conflate open sets with closed sets, or conflate Infinity with real numbers, so it's good to have this pop up to force people to think about the difference.
What do you mean by this? Zero is certainly a zero number, but it seems that it might also be a small number simultaneously.
Sum[1/x - 1/(x+1), {x, 1, ∞}] == 1
You do actually need infinity to arrive at that 1. Or try it the other way: tell me what mathematics works better if 1/0=0 than if 1/0=5. If there's an aesthetic preference displayed here, it's for mathematics as a tool for reasoning.
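A quick sketch of why: each partial sum telescopes to 1 - 1/(N+1), so no finite N ever reaches 1.

from fractions import Fraction

for N in (1, 10, 100, 1000):
    s = sum(Fraction(1, x) - Fraction(1, x + 1) for x in range(1, N + 1))
    print(N, s)  # 1/2, 10/11, 100/101, 1000/1001: approaching 1, never equal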
If so, how weirdly arbitrary that the additive zero is omitted for all multiplicative inverse definitions. (At least it seems to me). I always figured this was a consequence of our number systems, not of all fields.
What is your, uh, definition of this undefined* number you are familiar with?
What is "0"? It's not defined in the axioms other than additive zero. Or is it multiplicative zero? (1?). Is it the number zero?
If it is the additive zero defined in axiom (3), then it just seems weird to me that additive zero is undefined for multiplicative inverse for all fields always and forever.
If it is the number zero, then how does that generalize to other fields?
If the answer is "Numbers are the first field and all fields generalize that", then I suppose we are referring to the number (0), and that's fine, as other fields are welcome to define their own larger definition of zero that includes the number (0) ... ?
It's not the "number zero" because a field does not care about numbers, it's just elements of a set (which might be numbers like in R's case).
1 is not "multiplicative zero", it's the "multiplicative identity".
0 and 1 are just the shorthand we give for those elements, because those are the symbols we use in R, which is the most common field we deal with in everyday life.
Or am I misunderstanding your question?
The reason the additive identity cannot have a multiplicative inverse is likewise fairly straightforward: once again using `a` as our additive identity we have y.(x+a) = y.x for all x, y in our field; distributing on the LHS gives y.x + y.a = y.x for all x, y in our field; subtracting y.x from both sides finally gives us y.a = a for all y in our field. Since a field's multiplicative identity must differ from its additive identity, no y can satisfy y.a = 1, so a has no multiplicative inverse.
You would need to relax one or more of the field axioms to have a structure in which the additive identity can have a multiplicative inverse. I'm not aware of any algebraic structure of particular interest that would allow a multiplicative inverse of the additive identity, but in general if you're interested in reading more on this sort of thing I'd recommend reading about rings, commutative rings, and division algebras.
If they ignore it, I do not care; it is the business's problem anyway.
Worked for me for decades :)
At the end of the day, the / that we have in programming has the same problem as this article's /: almost all programming languages will return 5/2 = 2 when dividing integers, even though 2 * 2 is not 5! Division is not defined for all integers, but it's just convenient to extend it when programming.
So if some languages want to define 1/0 = 0, we really shouldn't be surprised that (1/0)*0 is not 1; we already had the (a/b)*b != a problem all along!
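For example, with Python's integer (floor) division:

a, b = 5, 2
print(a // b)             # 2
print((a // b) * b)       # 4, not 5
print((a // b) * b == a)  # False: the (a/b)*b != a problem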
This is not generally true. 5/2 = 2, 50/20 = 2, 500/200 = 2, and so on no matter how big the numbers get.
print(math.MinInt / -1)
https://go.dev/play/p/Vy1kj0dEsqP

Reusing symbols in a different context is pretty common; taking a symbol that is already broadly used in a specific way (in this case, that `a/b` is defined for elements in a field as multiplying `a` by the multiplicative inverse of `b`) is poor form and, frankly, a disingenuous argument.
The standard example is that we have a well-defined and useful notion of division in the ring Z/nZ for n any positive integer even in cases where we "divide" by an element that has no multiplicative inverse. Easy example: take n=8, then you can "divide" 4+nZ by 2+nZ just fine (and in fact turn Z/nZ into a Euclidean ring), even though 2+nZ is not a unit, i.e. admits no multiplicative inverse.
It’s all just definitions. Always has been.
IIUC, codeflo is arguing that the division operation defined in the article isn't "actual division" because (a/b)*b=a isn't true for all values. But I can't think of a definition of division that satisfies that criterion.
The parallel in programming would be the contract: you provide a function that works on a given set of values. Or the type: the function would "crash" if you passed a value not of the type of its parameter, but it's understood that this won't be done.
(In the remaining I'm referring to 1/x instead of a/b to simplify things a bit)
Another way of saying it is that the function is undefined for 0. (Or on {0}). Then the property is true for all values (on which the function is defined, but saying it is redundant, the function can't be called outside its domain, it is an error to try to do this).
The domain is often left out / implicit, but it is always part of the definition of a function.
0 is not in the domain, so it's not to be considered at all when studying the function (except maybe when studying limits, but the function will still not be called with it).
Also just to point out, the statement here really is a*b⁻¹*b = a, which might make it clearer why b≠0.
maybe someday there will be a revelation where somebody proposes that it's a new class of numbers we've never considered before like how (1-1), (0-1) and sqrt(-1) used to be nonsensical values to past mathematicians. For now it's not defined.
In modern math, the concept of a field establishes addition and multiplication within its structure. We are not free to redefine those without abandoning a boatload of things that depend on their definition.
Division is not inherent to field theory, but rather an operation defined by convention.
It seems like you're fixating on the most common convention, but as Hillel points out, there is no reason we have to adopt this convention in all situations.
It's true that it's not defined for integer types, but that wouldn't make a = b*(a/b) true for them either.
It's also common to define x/0 = infinity in the extended real numbers that floating point models.
>>> import random
>>> random.random()
0.4667867537470992
>>> n = 0
>>> for i in range(1_000_000):
...     a = random.random()
...     b = random.random()
...     if a == b * (a / b):
...         n += 1
...
>>> n
886304
For example:

>>> a, b = 0.7959754927336106, 0.7345016612407793
>>> a == b * (a / b)
False
>>> a
0.7959754927336106
>>> b * (a / b)
0.7959754927336105
This is off by one ulp ("unit in the last place").

And of course the division of two finite floating point numbers may be infinite:
>>> a, b = 2, 1.5e-323
>>> a
2
>>> b
1.5e-323
>>> b * (a / b)
inf
>>> a/b
inf
As a minor technical point, x/0 can be -INF if sgn(x) < 0, and NaN if x is a NaN.

For a good example of why this needs to be undefined, consider that the limit as b approaches zero of a/b is both +INF and -INF, depending on whether b is "approaching" from the side that matches a's sign or the opposite side. At the exact singularity where b=0, +INF and -INF are both equally valid answers, which is a contradiction.
also in case you weren't aware, "NaN" stands for "not a number".
In the extended reals case I mentioned, it's a definition used when working on the positives. Didn't think I needed to state the obvious.
Pff. The author wants to show off their knowledge of fields by defining a "division" operator where 1/0 = 0. Absolutely fine. I could define "addition" where 1 + 2 = 7. Totally fine.
What I can't do is write a programming language where I use the universally recognised "+" symbols for this operation, call it "addition" and claim that it's totally reasonable.
Under the standard definition of division implied by '/' it is mathematically wrong.
What they obviously should have done is use a different symbol, say `/!`. Obviously now they've done the classic thing and made the obvious choice unsafe and the safe choice unobvious (`/?`).
As a programmer, you're right: we have standard expectations around how computers do mathematics.
As a pedant: Why not? Commonly considered 'reasonable' things surrounding addition in programming languages are:
* (Particularly for older programming languages): If we let Z = X + Y, where X > 0 and Y > 0, any of the following can be true: Z < X, Z < Y, (Z - X) < Y. Which we commonly know as 'wrap around'.
* I haven't yet encountered a language which solves this issue: X + Y has no result for sufficiently large values for X and Y (any integer whose binary representation exceeds the storage capacity of the machine the code runs on will do). Depending on whether or not the language supports integer promotion and arbitrary precision integers the values of X and Y don't even have to be particularly large.
* Non-integer addition. You're lucky if 0.3 = 0.1 + 0.2; good luck trying to get anything sensible out of X + 0.2, where X = (2 ^ 128) + 0.1.
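To illustrate the last bullet with a quick Python sketch: at double precision the 0.2 is absorbed entirely.

print(0.1 + 0.2)      # 0.30000000000000004, not 0.3
x = 2.0 ** 128
print(x + 0.2 == x)   # True: adding 0.2 changes nothing at this magnitude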
Well, Python supports arbitrary precision integers. And some other niche languages (Sail is one I know).
I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.
For floats, I don't think it's actually unreasonable to use different operators there. I vaguely recall some languages use +. or .+ or something for float addition.
Fair point about wrapping.
As a Lisper, I very carefully chose an example to account for arbitrary-precision integers (so X + X where X is, say, 8^8^8^8 (remember, exponentiation is right-associative, 8^8^8^8 = 8^(8^(8^8)))).
> I don't think "running out of memory" counts as a caveat because it still won't give the wrong answer.
Being pedantic, it doesn't give the _correct_ answer either, because in mathematics 'ran out of memory' is not the correct answer for any addition.
The best you can do is "not the wrong answer".
In retrospect, I see his point better - practical use trumps theory in most language design decisions.
I haven't changed my mind but the reason has shifted more toward because "it's what a larger set of people expect in more situations" rather than mathematical purity.
It does not, because it is not. And the “real mathematicians” that he quotes aren’t supporting his case either, they’re just saying that there are cases where it’s convenient to pretend. If you look at the Wikipedia page for division by zero you may find “it is possible to define the result of division by zero in other ways, resulting in different number systems”: in short, if it’s convenient, you can make up your own rules.
Yes.
People find it confusing that there is no simple model that encapsulates arithmetic. Fields do not capture it in its entirety. The models of arithmetic that describe it end up being extremely complex.
Arithmetic is ubiquitous in proofs of other things, and people like the author of this blog cannot get over it.
Reality is weird, inconsistent, and weirdly incomplete.
Get used to it!
We don’t make up arbitrary rules, though. Well…so-called mathematicians who study systems with completely arbitrary rules are just jerking off. The rules that most mathematicians use are based on our intuitions about what can’t be proven but “has to be” true.
Which would be infinite, since ghosts occupy no space and can't interact with physical reality.
As a proportion, compared to nonexistence, any quantity of something is infinitely greater than nothing, so if not n/0, how would you express you expect not the absence of a thing, but its nonexistence?
That's an interesting solution...
If I have 5 apples and divide them into 0 buckets of apples, that makes sense. If I have 5 apples and divide them into 0 buckets of tractor, that doesn't make sense.
It's not at all a good idea for very important practical reasons as I outline in a reply to parent.
> const f = (x) => [x/2, x/0]
undefined
> f(10)
[ 5, Infinity ]
Go solves this really badly.
As for specifically what to do about division: the right default depends on your application. Either way is defensible, and I would rather work on making it easy to pick either style in the language of your choice, than to worry too much about what the default should be.
You can have one or the other.
You can't have both without the risk of nasal demons. Unless the result of the operation is business-safe to throw away.
That's why having the default / have both is a poor design choice by Gleam and Pony. Someone will reach for / and encounter demons. Afaict the other langs that do this are not intended for real-world prod use. By default / should force the developer into either crashable or unwrap error return. If you want some sort of opt-in "logic-unsafe /", fine, but call it something else like </> e.g.
Some people get scarred working in other langs and can't let go, I guess?
https://en.wikipedia.org/wiki/Projectively_extended_real_lin...
If anything it feels natural to yield +/-infinity
Most languages throw an error instead, but there are tradeoffs there too. If you've decided not to throw an error you should at least return a usable number and zero makes more sense than -1 or 7 or a billion or whatever.
You could also build the number stack from the ground up to accommodate this edge case, and make it so all arithmetic functions can handle infinities, infinitesimals and limits. I've come across a racket sublang like that but it's nearly unusable for the normal common things you want to do with numbers in code.
I don't think so, because getting 0 in a larger expression might yield a result that looks plausible, leading to hidden bugs. Inf and NaN both are good because they necessarily propagate all the way up to the end result, making it obvious that something went wrong.
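A quick sketch of the propagation argument in Python:

nan = float("nan")
print((nan * 3.0 + 7.0) / 2.0)  # nan: the poison value survives to the end
print((0.0 * 3.0 + 7.0) / 2.0)  # 3.5: a plausible-looking number hiding the bug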
But those are cases where the larger a value is, the less it contributes to the final value.
So I saw this in action once, and it created a mess. Private company had a stupid stock dividend mechanism: every shareholder received some fraction, dependent on fundraising, of a recurring floating pool of shares, quantity dependent on operating performance. (TL; DR Capital was supposed to fundraise, management was supposed to operate. It was stupid.)
One quarter, the divisor was zero for reasons I can't remember. This should have resulted in no stock dividend. Instead, the cap-table manager issued zero-share certificates to everyone. By Murphy's Law, this occurred on the last quarter of the company's fiscal year.
Zero-share certificates are used for one purpose: to help a shareholder prove to an authority that they no longer own any shares. Unlike normal share certificates, which are additive, a zero-share certificate doesn't add zero shares to your existing shares; it ambiguously negates them. In essence, on that day, the cap-table manager sent every shareholder a notice that looked like their shares had been cancelled. Because their system thought 1 / 0 = 0.
If you're dividing by zero in a low-impact system, it really doesn't matter what you output. Zero. Infinity. Bagel. If you're doing so in a physical or financial or other high-impact system, the appropriate output is confused puppy.
But it's quite a nice way to mask program bugs.
There is only one correct behavior for something named "int". Give the correct result or throw an error.
If you have a type named "int" with an operation called "addition", and that operation is not actually integer addition... it's wrong.
I tried once to investigate the implications, but it quickly became far more complex than with 'i' and never went far. Still intrigued whether this is somewhat interesting or a total waste of time, though.
Some languages will wrap division by zero in a special value, a NaN (not a number). You can then reason on top of that NaN if you want to.
So, in a sense, there are some people already doing practical stuff with substituting /0 for a new symbol.
4) The dyadic arithmetic operators <plus sign>, <minus sign>, <asterisk>, and <solidus> (+, -, *, and /, respectively) specify addition, subtraction, multiplication, and division, respectively. If the value of a divisor is zero, then an exception condition is raised: data exception-division by zero.

However, the "any operation involving NULL yields NULL" rule is standard:

1) If the value of any <numeric primary> simply contained in a <numeric value expression> is the null value, then the result of the <numeric value expression> is the null value.

https://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

So, dividing by NULL is allowed and yields NULL. Dividing by zero yielding NULL is non-standard (I used it though).
So one grain of sand is a heap and then when you remove that grain the heap disappears, but you only removed one grain from a heap so this is impossible because it is discontinuous. One solution is to wrap the problem in fuzzy logic with a 'heapness' measure.
Generalizing this type of solution we have a practice of wrapping paradoxes in other forms of logic. You would define an interface between these logics. For example in Trits (0,1,UNKNOWN) you could define an interface where you can change the type of NOT-UNKNOWN from Trit to Boolean. This would return at least some part of the entropy to the original domain, preserving a continuity. Wave Function Collapse is another example of translating from one logical domain to another.
non-zero / 0 = No number
"SQL dialects typically return NULL for erroneous operations." I disagree with this. NULL does not mean erroneous, it simply means the definition is not yet known and therefore cannot be discussed beyond saying you don't know. That could be erroneous, but you don't know yet, all you have is a NULL.
If it's any comfort, I do agree that NULL is better than 0 or some other non-null result. I just don't think it's best: it clouds the nature of the expression and its inputs, and is ultimately an incorrect result.
Also to be fair, MySQL had many more grievous foot-gun data quality issues in the past than this... though these things certainly did make it easier for a non-expert database developer to get something working without blow-up-everything errors.
SQL returns NULL if any input value into an expression is NULL, not if an invalid operation is attempted. If the expression contains an error, SQL throws an error, it doesn't return NULL.
The SQL standard requires to error out in this case.
Also: I don't know of any system that would not result in an error when you try to divide something by zero.
Trying to calculate... I don't know, how many 2-disk raid6 groups I need to hold some amount of data is an error, not "lol you don't need any".
If my queue consumer can handle 0 concurrent tasks, it will take literally forever to finish, not finish instantly.
Additionally, if inverses are defined as separate objects then what is 2 plus the inverse of 2? It doesn't simplify to 2.5 because there's no addition axiom for numbers and multiplicative inverses, or for that matter any rules for inverses with inverses. So you might have 1/2 and 5/10 but they're not equal and can't be multiplied together.
Making wrong definitions creates contradictions. With 1*x=x, ∞/∞=1, the associative property x*(y/z)=(x*y)/z, and ∞*∞=∞:
∞ = ∞*1 = ∞*(∞/∞) = (∞*∞)/∞ = ∞/∞ = 1
However, you get cool new results, like x/x=1+(0/x). Definitely some upsides.
If we add new numbers like ∞, -∞, and NaN (as the neighbor comment suggests with IEEE754-like arithmetic), now x/x=1 requires x≠0, x≠∞, x≠-∞, and x≠NaN. Adding more conditions changes the multiplicative inverse field axiom, and thus doesn't extend field theory. Also, now x*0=0 requires x≠∞, x≠-∞, and x≠NaN. What a mess.
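IEEE 754 sidesteps exactly this contradiction by making ∞/∞ a NaN rather than 1, which Python floats inherit:

inf = float("inf")
print(inf * 1.0)   # inf
print(inf + 2.0)   # inf
print(inf / inf)   # nan, not 1: the chain above never gets started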
I’m not suggesting that we add numbers or change the definition from undefined. I think undefined is a more accurate description of x/0, because x/0 is clearly far greater than 0.
No, just look at the graph of f(x) = 1/x. +inf can't work.
It can work if you assume that no numbers are ever negative.
1 / 0 = 0 (2018) - https://news.ycombinator.com/item?id=42167875 - Nov 2024 (8 comments)
What is the best answer to divide by 0 - https://news.ycombinator.com/item?id=40210775 - April 2024 (3 comments)
1/0 = 0 - https://news.ycombinator.com/item?id=17736046 - Aug 2018 (570 comments)
IMO, whether something like this makes sense is a separate matter. Personally I always just think of division in terms of multiplicative inverses, so I don't see how defining division by zero helps other than perhaps making implementation easier in a proof assistant. But I've seen people say that there are some cases where having a/0 = 0 works out nicely. I'm curious to know what these cases are, though.
You have a field (a set of "numbers"). Multiplication is defined over the field. You want to invent a notion of division. Let's introduce the notation "a/b" to refer to some member of a field such that "a/b" * b = a.
As Hillel points out, you can identify "a/b" with a*inverse(b), where "inverse" is the multiplicative inverse. And yes, there is no inverse(0). But really let's just stick with the previous definition: "a/b" * b = a.
Now consider "a/0". If "a/0" is in the field, then "a/0" * 0 = a. Let's consider the case where a != 0. Then we have "a/0" * 0 != 0. But this cannot be true if "a/0" is in the field, because for every x we have x * 0 = 0. Thus "a/0" is not in the field.
Consider "a/0" with a=0. Then "a/0" * 0 = 0. Any member of the field satisfies this equation, because for every x we have x * 0 = 0. So, "a/0" could be any member of the field. Our definition of division does not determine "0/0".
Whether you can assign "1/0" to a member of the field (such as 0) depends on how you define division.
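The two cases are easy to see by brute force in a small finite field, say Z/7Z (a sketch):

# Solutions x to x * b == a (mod 7), i.e. candidates for "a/b".
p = 7
for a, b in [(1, 3), (1, 0), (0, 0)]:
    sols = [x for x in range(p) if (x * b) % p == a % p]
    print(f"{a}/{b}: {sols}")
# 1/3: [5]        exactly one solution, ordinary division
# 1/0: []         "a/0" is not in the field when a != 0
# 0/0: [0..6]     any member works, so the definition doesn't determine it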
If you actually write 1/0 in a manner that can be discovered through static analysis, that could just be a compile time error.
If you compute a zero, and then divide by it… I dunno. Probably what happened was the denominator rounded or truncated to zero. So, you actually have 1/(0±e), for some type-dependent e. You have an interval which contains a ton of valid values, why pick the one very specific invalid value?
It does not. It is undefined.
Ergo, x/x=1, so 0/0=1. You can use the same logic for x/0=any rational number.
Defining x/0=0 is impossibly arbitrary.
That is an intuition why division by zero is undefined.
Defining it arbitrarily is uninteresting.
Disappointing.
Crashing would have been preferable.
1/0 = 0 is unsuitable and dangerous for anyone doing anything in the real world.
I've learned my lesson since, but still.
> Everything would have been just fine if dividing by zero yielded zero
perhaps you weren't making business decisions based on the reported average, just logging it for metrics or something, in which case I can see how a crash/restart would be annoying.
I imagine the problem was that it crashed the whole process, and so the processing of other, completely fine data that was happening in parallel, was aborted as well. Did that lead to that data being dropped on the floor? Who knows — but probably yes.
And process restarts are not instantaneous, just so you know, and that's even without talking about bringing the application into the "stable stream processing" state, which includes establishing streaming connections with other up- and downstream services.
Gleam offers division functions that return an error type, and you can use those if you need that check.
They fit a list-length use case well as they work better with a piping syntax which is popular in Gleam.
[1] https://blog.nestful.app/p/why-i-rewrote-nestful-in-gleam
That, however, is still not enough to alleviate OP's concerns, which is why I've explained how the `1/0=0` problem can be entirely avoided.
I expect entirely avoiding the problem OP mentioned is enough to alleviate the concerns it raises.
Nobody is questioning your intentions. People writing apps in memory-unsafe languages don’t give fewer shits. They’re just more prone to certain classes of errors.
> how the `1/0=0` problem can be entirely avoided
1/0 problems are generally expected to be entirely avoided. This is about where the system behaves unexpectedly, whether due to human error or the computer being weird.
All I was doing was clarifying the impression OP gave.
Now that we all know the details we can make whatever tradeoff we prefer.
It's the year of the Lord 2024, why is a new language putting in such a huge footgun out of the box in its stdlib?
Yes, but is that what any given developer will reach for first? Especially considering that an error-returning division is not composable?
The language puts people into a place where the instinctive design can cause very dangerous outcome, hard to see in a code review, unless someone on the team is a language lawyer. You probably don't want one of those on your team.
I think there's a reasonable argument for Gleam to have an operator that does division resulting in zero, but at the very least that should NOT be "/".
I guess that's slightly more of a warning than giving 0.
Intel's 80186 produced a result like that in one special case, because of a missing check in the microcode. This could be called a bug or an optimization: the "AAM" instruction was only documented as dividing by 10, but in fact takes a divisor as part of its opcode (D4 0A = divide by 10, as listed in the documentation; D4 00 = divide by zero). The normal divide instruction - as well as AAM on all other x86 processors - check for zero and throw an exception.
RISC-V just doesn't bother doing that.
Or rather how division could be implemented. RISC-V is an abstract instruction set architecture not born from a concrete chip, like x86 was; but they are trying to make things easy on the hardware.
- a sum type (or some wrapper type) `number | DIVISION_BY_ZERO` forces you to explicitly handle the case of having divided by zero
- alternatively, if the division operator only accepted the set `number - 0` as type for the denominator you'd have to explicitly handle it ahead of the division. Probably better as you don't even try to divide by zero, but not sure how many languages can represent `number - 0` as a type.
Positive = One | Succ Positive
Nat = Zero | Positive
NonZeroInt = Positive | Neg Positive
Int = Zero | NonZeroInt
Rational = Ratio Int Positive
etc.

Depending on the language, these could be implemented with little or no runtime overhead.
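A minimal sketch of the first option (the wrapper type) in Python, with Optional standing in for the hypothetical `number | DIVISION_BY_ZERO`:

from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    # None plays the role of the DIVISION_BY_ZERO variant.
    return None if b == 0 else a / b

match safe_div(1.0, 0.0):
    case None:
        print("divided by zero: the caller must decide what happens")
    case q:
        print(f"quotient: {q}")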
I know that nested radicals can't always be un-nested, so I don't think larger sets (like the Algebraic numbers) can be reduced to a unique normal form. That makes comparing them for equality harder, since we can't just compare them syntactically. For large sets like the Computable numbers, many of their operations become undecidable. For example, say we represent Computable numbers as functions from N -> Q, where calling such a function with argument x will return a rational approximation with error smaller than 1/x. We can write an addition function for these numbers (which, given some precision argument, calls the two summand functions with ever-smaller arguments until they're within the requested bound), but we can't write an equality function or even a comparison function, since we don't know when to "give up" comparing numbers like 0.000... == 0.000....
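Here's a minimal sketch of that representation, with add querying each summand at double precision so the two errors of at most 1/(2n) sum to at most 1/n:

from fractions import Fraction
from typing import Callable

# A computable number: f(n) returns a rational within 1/n of the true value.
Computable = Callable[[int], Fraction]

def add(x: Computable, y: Computable) -> Computable:
    return lambda n: x(2 * n) + y(2 * n)

third: Computable = lambda n: Fraction(1, 3)  # exactly 1/3 at any precision
print(add(third, third)(1000))  # 2/3, within 1/1000 of the true sum
# No analogous eq(x, y) is possible: we'd never know when to stop refining.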
FYI I'm currently playing around with numerical representations at http://www.chriswarbo.net/blog/2024-11-03-rationalising_deno...
1. Might crash.
2. Result may not be what you’d expect from conventional math.
3. Inputs and outputs are different types.
4. Nonlinear control flow i.e. exceptions.
Division isn’t even particularly special here. If you have fixed-width integer types (as most languages seem to) then this is a problem for all the basic operators.
3 and 4 are attractive solutions but can get annoying or cause more bugs. (How many catch blocks out there have zero test coverage?) Between 1 and 2, 1 is usually much better.
For cases where the programmer wants 2, you can provide alternate operators. For example, Swift crashes on overflow with the standard operators, but has variants like &+ for modular arithmetic.
But one time an exception came at just the right time to cause the internal state and database state to be out of sync. That caused data updates in the service from that point on to start saving bad data into the database. It took a few hours to notice the issue and by that point a lot of the persisted data was trashed. We had to take down the service, restore the database from a backup, and reconstruct the correct data for the entire day.
Fortunately the data issues here were low impact, but it could just as easily have been critical data that was bad. And having a business operate on incorrect data like that could cause far bigger issues than a bit of downtime while the service restarts.
When software engineers make mistakes dividing by 0 and end up with Exceptions being raised or NaNs being output, they'll usually blame themselves.
When the results are wrong numbers all over the place, they'll blame the language.
There are 2 cases when people are going to "use" x/0:
1. They made a mistake.
2. They KNOW that x/0 returns 0 and they take x/y as a shortcut for (y == 0 ? 0 : x/y)
Is that shortcut useful? No. Is it dangerous? Yes. Hence, this is a bad idea.
My gripe with arbitrary choices like this is that it pushes complexity from your proof's initial conditions ("we assume x != 0") into the body of your proof (every time you use division now you've split your proof into two cases). The former is a linear addition of complexity to a proof, whereas the latter can grow exponentially.
Of course, nothing is stopping you from using an initial condition anyway to avoid the case splitting, but if you were going to do that why mess with division in the first place?
Without arbitrary-precision numerics and functions that are explicit about corner cases, it's always a simplification. However, performance- and code-wise this is usually not feasible.
And why do you bring up infinity? In regular math, 1/0 is literally undefined. It's not infinity.
The question is what definitions will be useful and what properties you gain or give up. Being a partial function is a perfectly acceptable trade-off for mathematics, but perhaps it makes it difficult to reason about programs in some cases.
I suppose the aim of the article is to point out the issue is not one of soundness, which is useful — but I wish more emphasis had been put on the fact that it doesn't solve the question of what 1/0 should do and produced arguments with regards to that.
0 ∈ (−∞, ∞)
In combinatorics and discrete probability, `0**0 = 1` is a useful convention, to the point that some books define a new version of the operator - let's call it `***` - and define `a***b = a**b` except that `0***0 = 1` and then use the new operator instead of exponentiation everywhere. (To be clear, `**` is exponentiation, I could write `a^b` but that is already XOR in my mind.)
So one might as well overload the old one: tinyurl.com/zeropowerzero
This causes no problems unless you're trying to do real (or complex) analysis.
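Python, incidentally, already overloads the old operator this way:

>>> 0 ** 0
1
>>> 0.0 ** 0.0
1.0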
1/0 can cause a few more problems, but if you know you're doing something where it's safe, it's like putting the Rust `unsafe` keyword in front of your proof and so promising you know what you're doing.
It's really just a bit unfortunate that (x, y) -> x**y is not continuous at (0, 0).
And I think if you look at the Riemann sphere, the inverse of zero is the point where +infinity and -infinity meet. I would call that 0^(-1).
https://math.stackexchange.com/questions/1294852/why-does-wo...
Mathematica calls this point on the Riemann sphere "complex infinity".
I like that. I try to live by a similar protocol.
On topic: Each context has the right to establish their own rules.
If the rules work, the context survives. If not, then the context dies.
It's a version of "you can't divide by zero, but you can multiply the divisor on both sides of the equation and then use 0*a=0."
Note that infinity would be a fine answer IF MATHEMATICS COULD BE CONSISTENTLY EXTENDED to define it to be so, but this cannot be done (see below). Note that using infinity does not "break" mathematics (as some have suggested below) otherwise mathematicians would not use infinity at all.
If we have an expression that is not a number, such as 1/0, you can sometimes consistently define it to be something, such as a number or positive infinity or negative infinity, IF THAT WOULD BE CONSISTENT with the rest of mathematics. Let's see an example of the standard means of getting a consistent definition of exponentiation starting with its definition on positive integers and extending eventually to a definition for on a much bigger set, the rationals (ratios of signed integers).
We define 2 ^ N (exponentiation, "two raised to the power of N") for N a positive integer to be 2 multiplied by itself N times. For example: 2 ^ 1 = 2; 2 ^ 2 = 4; 2 ^ 3 = 8.
Ok, what is 2 ^ N where N is a negative integer? Well we did not define it, so it is nothing. However there is a way to CONSISTENTLY EXTEND the definition to include negative exponents: just define it to preserve the algebraic properties of exponentiation.
For exponents we have: (2 ^ A) * (2 ^ B) ("two raised to the power of A times two raised to the power of B") = 2 ^ (A+B) ("two raised to the power of A plus B"). That is, when you multiply, the exponents add. You can spot check it: (2 ^ 2) * (2 ^ 3) = 4 * 8 = 32 = 2 ^ 5 = 2 ^ (2 + 3).
So we can EXTEND THE DEFINITION of exponentiation to define 2 ^ -N for positive integer N (so a negative integer exponent) to be something that would BE CONSISTENT WITH the algebraic property above as follows. Define 2 ^ -N ("two raised to the power of negative N") to be (1/2) ^ N ("one half raised to the power N"). Check: (2 ^ -1) * (2 ^ 2) = ((1/2) ^ 1) * (2 ^ 2) = 1/2 * 4 = 2 = 2 ^ 1 = 2 ^ (-1 + 2).
Ok, what is 2 ^ 0 ("two raised to the power of zero")? Again, we have not defined it, so it is nothing. However, again, we can CONSISTENTLY EXTEND the definition of exponentiation to give it a value. 2 ^ 0 = (2 ^ -1) * (2 ^ 1) = 1/2 * 2 = 1. This always works out no matter how you look at it. So we say 2 ^ 0 = 1.
I struggled with this for days when I was a kid, literally yelling in disbelief at my parents until they would run away from me. I mean 2 ^ 0 means multiplying 2 times itself 0 times, which means doing nothing, so I thought it should be 0. After 3 days I finally realized that doing nothing IN THE CONTEXT OF MULTIPLICATION is multiplying by ONE, not multiplying by zero, so 2 ^ 0 should be 1.
Ok, is there a way to CONSISTENTLY EXTEND the definition of exponentiation to include non-integer exponents? Yes, we can define 2 ^ X for X = P / Q, where P and Q are integers (a "rational number"), to be the Q-th root of 2 ^ P, that is, the number which, when raised to the power Q, gives 2 ^ P. All the properties of exponentials work out.
Notice how we can keep EXTENDING the definition of exponentiation starting from positive integers, to integers, to rationals, as long as we do so CONSISTENT with the properties of the previous definition of exponentials. I will not go into the details, but we can CONSISTENTLY EXTEND the definition of exponentiation to real numbers by taking limits. For example, we can have a consistent definition of 2 ^ pi ("two raised to the power of pi") by taking the limit of 2 ^ (P/Q) as P/Q approaches pi.
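A quick Python spot-check of the whole chain of extensions (floats for the non-integer steps, so expect rounding at the last step):

import math

print(2 ** 3)                 # 8: positive integer exponent
print(2 ** -3, (1 / 2) ** 3)  # 0.125 0.125: negative exponent
print(2 ** 0)                 # 1: zero exponent
print(math.isclose(2 ** 1.5, math.sqrt(2 ** 3)))  # True: 2^(3/2) is the square root of 2^3
print(2 ** math.pi)           # 8.8249778...: real exponent, defined via limits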
HOWEVER, IN CONTRAST to the above extension of the definition of exponentiation, there is NO SUCH SIMILAR CONSISTENT EXTENSION to division that allows us to define 1/0 as ANY NUMBER AT ALL, even if we allow extending to include positive infinity and negative infinity.
The limit of 1/x as x goes to zero FROM THE POSITIVE DIRECTION = positive infinity. Some example points of this sequence: 1/1 = 1; 1/0.5 = 2; 1/0.1 = 10; 1/0.01 = 100, etc. As you can see the limit is going to positive infinity.
However, the limit of 1/x as x goes to zero FROM THE NEGATIVE DIRECTION = NEGATIVE infinity. Some example points from this sequence: 1/-1 = -1; 1/-0.5 = -2; 1/-0.1 = -10; 1/-0.01 = -100, etc. As you can see the limit is going to NEGATIVE infinity.
Therefore, since positive infinity does not equal negative infinity, there is NO DEFINITION of 1/0 that is consistent with BOTH of these limits at the same time. The expression 1/0 is NOT A NUMBER, even if you include positive and negative infinity, and mathematics cannot be consistently extended to make it into a number. Q.E.D.