myslsl
@myslsl@lemmy.world
- Comment on Science!!!!!!! 2 months ago:
I’m sorry my mom called you “pretty fucking dumb”. I know that must have hurt your feelings.
- Comment on Science!!!!!!! 2 months ago:
This feels pretty fucking dumb.
- Comment on Tell me Y 3 months ago:
Ain’t nothin’ but a heartache.
- Comment on what if the shop is empty? 3 months ago:
The punchline here is a little compact. I don’t feel like it really gives the closure I need. Maybe if the basis for the joke had more continuity the humor would be less discrete.
- Comment on I just cited myself. 4 months ago:
i has nice real-world analogues in the form of rotations by pi/2 about the origin.
Since i=exp(i pi/2), if you take any complex number z and write it in polar form z=r exp(it), then multiplication by i yields a rotation of z by pi/2 about the origin, because zi=r exp(it) exp(i pi/2)=r exp(i(t+pi/2)) by the rules of exponents for complex numbers.
More generally, since any pair of complex numbers z, w can be written in polar form z=r exp(it), w=u exp(iv), we have wz=(ru) exp(i(t+v)). This shows multiplication of a complex number z by any other complex number w can be thought of in terms of rotating z by the angle that w makes with the x-axis (i.e. the angle v) and then scaling the resulting number by the magnitude of w (i.e. the number u).
Alternatively, you can reach similar conclusions via de Moivre's theorem if you do not like complex exponentials.
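To see this numerically, here's a small sketch of my own (not part of the original comment) using Python's cmath module: multiplying by i keeps the magnitude the same and adds pi/2 to the angle.

```python
import cmath

# Illustration: z = 2*exp(0.3i), i.e. magnitude r=2, angle t=0.3.
z = cmath.rect(2.0, 0.3)

# Multiply by i = exp(i*pi/2): the magnitude stays 2, the angle
# becomes 0.3 + pi/2.
w = 1j * z

print(abs(z), cmath.phase(z))  # magnitude 2, angle 0.3
print(abs(w), cmath.phase(w))  # magnitude 2, angle 0.3 + pi/2
```

The same check works for any pair z, w: abs(w*z) equals abs(w)*abs(z) and phase(w*z) is the sum of the two phases (up to wrapping into (-pi, pi]).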
- Comment on I just cited myself. 4 months ago:
They don’t eventually become 1. Their limit is 1 but none of the terms themselves are 1.
A sequence, its terms and its limit (if it has one) are all different things. Notation like 0.999… refers to a limit of a particular sequence, not the sequence itself nor the terms of the sequence.
For example the sequence 1, 1/2, 1/3, 1/4, … has terms that get closer and closer to 0, but no term of this sequence is 0 itself.
Look at this graph. If you graph the sequence I just mentioned above and connect each dot you will get the graph shown in this picture (ignoring the portion to the left of x=1).
As you go further and further out along this graph in the positive x direction, the curve that is shown gets closer and closer to the x-axis (where y=0). In a sense the curve is approaching the value y=0. For this curve we could certainly use wordings like “the value the curve approaches” and it would be pretty clear to me and you that we don’t mean the values of the curve itself. This is the kind of intuition that we are trying to formalize when we talk about limits (though this example is with a curve rather than a sequence).
Our sequence 0.9, 0.99, 0.999, … is increasing towards 1 in a similar manner. The notation 0.999… essentially refers to the (limit) value this sequence is increasing towards, rather than to the individual terms of the sequence.
I have been trying to dodge the actual formal definition of the limit of a sequence this whole time since it’s sort of technical. If you want you can check it out here though (note that implicitly in this link the sequence terms and limit values should all be real numbers).
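A quick sketch of that distinction (my own example, using exact fractions so no rounding muddies the point): the terms 1, 1/2, 1/3, … get arbitrarily close to 0, yet no individual term equals 0.

```python
from fractions import Fraction

# The sequence 1, 1/2, 1/3, ..., 1/1000 as exact rational numbers.
terms = [Fraction(1, n) for n in range(1, 1001)]

print(min(terms))                  # 1/1000: tiny, but still not 0
print(any(t == 0 for t in terms))  # False: no term equals the limit
```

Taking more terms only shrinks the smallest term further; it never actually reaches the limit 0.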
- Comment on I wish I was as bold as these authors. 4 months ago:
- Comment on I just cited myself. 4 months ago:
My degree is in mathematics. This is not how these notations are usually defined rigorously.
The most common way to do it starts from sequences of real numbers, then limits of sequences, then sequences of partial sums, then finally these notations turn out to just represent a special kind of limit of a sequence of partial sums.
If you want a bunch of details on this read further:
A sequence of real numbers can be thought of as an ordered nonterminating list of real numbers. For example: 1, 2, 3, … or 1/2, 1/3, 1/4, … or pi, 2, sqrt(2), 1000, 543212345, … or -1, 1, -1, 1, … Formally a sequence of real numbers is a function from the natural numbers to the real numbers.
A sequence of partial sums is just a sequence whose terms are defined via finite sums. For example: 1, 1+2, 1+2+3, … or 1/2, 1/2 + 1/4, 1/2 + 1/4 + 1/8, … or 1, 1 + 1/2, 1 + 1/2 + 1/3, … (do you see the pattern for each of these?)
The notion of a limit is sort of technical and can be found rigorously in any calculus book (such as Stewart’s Calculus) or any real analysis book (such as Rudin’s Principles of Mathematical Analysis) or many places online (such as Paul’s Online Math Notes). The main idea though is that sometimes sequences approximate certain values arbitrarily well. For example the sequence 1, 1/2, 1/3, 1/4, … gets as close to 0 as you like. Notice that no term of this sequence is actually 0. As another example notice the terms of the sequence 9/10, 9/10 + 9/100, 9/10 + 9/100 + 9/1000, … approximate the value 1 (try it on a calculator).
I want to stop here to make an important distinction. None of the above sequences are real numbers themselves because lists of numbers (or more formally functions from N to R) are not the same thing as individual real numbers.
Continuing with the discussion of sequences approximating numbers, when a sequence, call it A, approximates some number L, we say “A converges”. If we want to also specify the particular number that A converges to we say “A converges to L”. We give the number L a special name called “the limit of the sequence A”.
Notice in particular L is just some special real number. L may or may not be a term of A. We have several examples of sequences above with limits that are not themselves terms of the sequence. The sequence 0, 0, 0, … has as its limit the number 0 and every term of this sequence is also 0. The sequence 0, 1, 0, 0, … where only the second term is 1, has limit 0 and some but not all of its terms are 0.
Suppose we define a sequence a1, a2, a3, … where each of the an numbers is one of the numbers from 0, 1, 2, 3, 4, 5, 6, 7, 8 or 9. It can be shown that any sequence of the form a1/10, a1/10 + a2/100, a1/10 + a2/100 + a3/1000, … converges (it is too technical for me to show this here but this is explained briefly in Rudin ch 1 or Hrbacek/Jech’s Introduction To Set Theory).
As an example, if each of the an values is 1, our sequence of partial sums above simplifies to 0.1, 0.11, 0.111, … If the an sequence is 0, 2, 0, 2, … our sequence of partial sums is 0.0, 0.02, 0.020, 0.0202, …
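If it helps, the partial sums above can be computed exactly with Python's fractions module (a sketch of my own, not part of the original comment):

```python
from fractions import Fraction

def partial_sums(digits):
    # Build a1/10, a1/10 + a2/100, a1/10 + a2/100 + a3/1000, ...
    # exactly, with no floating point rounding.
    s, sums = Fraction(0), []
    for n, a in enumerate(digits, start=1):
        s += Fraction(a, 10**n)
        sums.append(s)
    return sums

print(partial_sums([1, 1, 1]))     # sums equal 1/10, 11/100, 111/1000
print(partial_sums([0, 2, 0, 2]))  # sums equal 0, 1/50, 1/50, 101/5000
```

Feeding in any digit sequence a1, a2, a3, … produces the corresponding sequence of partial sums whose limit the decimal notation denotes.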
We define the notation 0 . a1 a2 a3 … to be the limit of the sequence of partial sums a1/10, a1/10 + a2/100, a1/10 + a2/100 + a3/1000, … where the an values are all chosen as mentioned above. As noted above, this limit always exists.
In particular 0 . a1 a2 a3 … is just some number and it may or may not be distinct from any term in the sequence of sums we used to define it.
When each of the an values is the same number it is possible to compute this sum explicitly. See here (where a=an, r=1/10 and subtract 1 if necessary to account for the given series having 1 as its first term).
So by definition the particular case where each an is 9 gives us our definition for 0.999…
To recap: the value of 0.999… is essentially just whatever value the (simplified) sequence of partial sums 0.9, 0.99, 0.999, … converges to. This is not necessarily the value of any one particular term of the sequence. It is the value (informally) that the sequence is approximating. The value that the sequence 0.9, 0.99, 0.999, … is approximating can be proved to be 1. So 0.999… = 1, essentially by definition.
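A small numerical sketch of the recap (my own illustration, using exact rational arithmetic): the gap between the nth partial sum and 1 is exactly 1/10^n, which shrinks past any positive bound, and that is precisely why the limit, i.e. the value 0.999… denotes, is 1.

```python
from fractions import Fraction

# Exact partial sums 0.9, 0.99, 0.999, ... and their gap to 1.
s = Fraction(0)
for n in range(1, 6):
    s += Fraction(9, 10**n)
    print(s, "gap to 1:", 1 - s)  # gap is exactly 1/10^n
```

No partial sum equals 1, but the notation 0.999… names the limit of these sums, not any individual sum, and that limit is 1.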
- Comment on I just cited myself. 4 months ago:
Some software can be pretty resilient. I ended up watching this video here recently about running Doom using different values for the constant pi, which was pretty nifty.
- Comment on I just cited myself. 4 months ago:
What exactly do you think notations like 0.999… and 0.333… mean?
- Comment on I just cited myself. 4 months ago:
Yes, informally in the sense that the error between the two numbers is “arbitrarily small”. Sometimes in introductory real analysis courses you see an exercise like: “prove that if x, y are real numbers such that x=y, then for any real epsilon > 0 we have |x - y| < epsilon.” That is a more rigorous way to say roughly the same thing. Going back to informality: if you give any required degree of accuracy (epsilon), then the error between x and y (which are the same number) is less than that required degree of accuracy.
- Comment on I just cited myself. 4 months ago:
You are just wrong.
The rigorous explanation for why 0.999…=1 is that 0.999… represents a geometric series of the form 9/10+9/10^2+… by definition, i.e. this is what that notation literally means. The sum of this series follows by taking the limit of the corresponding partial sums of this series (see here), which happens to evaluate to 1 in the particular case of 0.999…; this step is by definition of a convergent infinite series.
- Comment on I just cited myself. 4 months ago:
He is right. 1 approximates 1 to any accuracy you like.
- Comment on eigenspaces 6 months ago:
Eigenvectors, eigenvalues, eigenspaces etc are all pretty simple as basic definitions. They just turn out to be essential for the proofs of a lot of nice results in my opinion. Stuff like matrix diagonalization, Gram-Schmidt orthogonalization, polar decomposition, singular value decomposition, pseudoinverses, the spectral theorem, Jordan canonical form, rational canonical form, Sylvester's law of inertia, a bunch of nice facts about orthogonal and normal operators, some nifty eigenvalue-based formulas for the determinant and trace, etc.
- Comment on eigenspaces 6 months ago:
My experience with eigenstuff has been kind of a slow burn. At first it feels like “that’s it?”, then you do a bunch of tedious calculations that just kind of suck to do… But as you keep going they keep popping up in ways that lead to some really nice results in my opinion.
- Comment on That wasn't a microwave safe bowl... 7 months ago:
On the brightside, you are now the proud owner of a hip designer bean plate.
- Comment on Especially with a real one! 10 months ago:
These are not mutually exclusive. Believe in yourself.
- Comment on [deleted] 1 year ago:
If you go see an independent mental health professional aren’t there only two outcomes?
- They confirm you are as mentally fit as you think you are and you go on with your life (possibly with more resources than you previously had if your mental health really does take a dive).
- They identify something that is actually wrong with your mental health and help you fix or learn to cope with the issue.
I’m not seeing the downside here. Who cares if it’s coming from some shady government boogeyman or some random stranger?
- Comment on Does .999… = 1? 1 year ago:
Limits don’t disprove this at all. In order to prove 0.999…=1 you need to first define what 0.999… even means. You typically define this as an infinite geometric series with terms 9/10, 9/100, 9/1000 and so on (so as the infinite sum 9/10+9/100+9/1000+…). By definition this is a sequence of partial sums; each partial sum is a finite geometric sum, which you would typically rewrite in a convenient formula using properties of geometric sums before taking the limit (see the link).
The thing is that it follows from our definitions that 0.999… IS 1 (try and take the limit I mentioned), they are the same numbers. Not just really close, they are the same number.
math15fun.com/2017/…/finding-limits-graphically/ If a limit exists… (such as the case in this link), -1 is a hole… but not -0.999999…
What you’re saying here isn’t actually true because -0.999… and -1 are the same number. -0.9, -0.99, -0.999 and so on are not holes, but -0.999… is a hole, because it is the number -1.
You see the distinction here? Notations -0.9, -0.99, -0.999 and so on are all defined in terms of finite sums. For example -0.999 is defined in terms of the decimal expansion -(9/10+9/100+9/1000). But -0.999… is defined in terms of an infinite series.
The same sort of reasoning applies to your other decimal examples.
It’s even more apparent in “weird” functions like the one outlined here… math.stackexchange.com/…/limits-of-functions-with… for x=1 the output is 2… but for x=0.99999… it’s 1.
You take limits of functions. The first limit is the limit of a function f that, according to the diagram of the problem, approaches 1 as x goes to 1. But the second limit is the limit of a constant function that always maps elements of its domain to the value 2 (which is f(1)). You can show using the epsilon delta definition of the limit that such a limit will be equal to 2.
The notation here might be a little misleading, but the intuition for it is not so bad. Imagine the graph of your constant function 2, it’s a horizontal line at y=2.
But I think that it’s a matter of the origin of the 0.9999…
This is correct. It follows directly from the definition of the notation 0.999… that 0.999…=1.
I don’t think that 3/3 is ever actually 0.9999… but rather is just a “graphical glitch” of base 10 math. It doesn’t happen in base12 with 1/3, but 1/7 still does.
Then you are wrong. 3/3 is 1, 0.999… is 1, these are all the same numbers. Just because the notation can be confusing doesn’t make it untrue. Once you learn the actual definitions for these notations and some basic facts about sums/series and limits you can prove for yourself that what I’m saying is the case.
I do accept that we can just presume 0.999… can just be assumed 1 due to how common 3*(1/3) is.
It’s not an assumption or presumption. It is typically proved in calculus or real analysis.
But I do think it throws a wrench in other parts of math if we assume it’s universally true. Just like in programming languages… primarily float math that these types of issues crop up a lot, we don’t just assume that the 3.999999… is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.
It definitely doesn’t throw a wrench into things in other parts of math. IEEE floats just aren’t comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like offsets in the exponent for example) that make it kinda different. But there’s only finitely many different numbers that these kinds of standards can represent due to there only being finitely many bit patterns for your finite number of bits. The base 10 representation of a number does not have the same restriction on the number of digits you can use to represent numbers. When you write 0.999…, there aren’t just a lot (but finitely many) 9’s after the decimal point, there are infinitely many 9’s after the decimal point.
In a programming context, once you start using floating point math you should avoid using direct equality at all and instead work within some particular error bound specified by what kind of accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts the extra accuracy of 0.0000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.
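A minimal sketch of that advice (my own example, not from the original comment), using the standard library's math.isclose to compare within a tolerance instead of with exact equality:

```python
import math

# Binary floats can't represent 0.1 or 0.2 exactly, so the sum
# picks up a tiny rounding error and exact equality fails.
x = 0.1 + 0.2
print(x == 0.3)                             # False

# Comparing within an error bound suited to the problem works.
print(math.isclose(x, 0.3, rel_tol=1e-9))   # True
```

The right tolerance (rel_tol or abs_tol) depends on the problem; the point is to choose it deliberately rather than rely on ==.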
I have no reason to believe that this isn’t the case for our base10 numbering systems either.
The issue here is that you don’t understand functions, limits, base expansions of numbers, or what the definition of notation like 0.999… actually is.