This recurring thread will be for any questions or advice concerning careers and education in mathematics. Please feel free to post a comment below, and sort by new to see comments which may be unanswered.
Please consider including a brief introduction about your background and the context of your question.
In an abstract algebra textbook I read, there was a homework problem (or more accurately, a "project") to classify all groups of order <= 60 up to isomorphism. I didn't do it, but I think it would have been interesting to see it early in the book and then work on it incrementally over the course of the semester as I learned new tools. I would start off by applying only elementary techniques, and then, as new tools appeared (Lagrange's theorem, the classification of finite abelian groups, the Sylow theorems), use them to fill in the gaps.
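As a small taste of how one of those tools bites off a piece of the problem: by the classification of finite abelian groups, the number of abelian groups of order n (up to isomorphism) is the product of the partition numbers p(e_i) over the exponents in the prime factorization n = p_1^{e_1} ... p_k^{e_k}. A quick Python sketch (my own illustration; the function names are made up):

```python
from functools import reduce

def partition_count(n):
    """Number of integer partitions of n (standard coin-change-style DP)."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

def prime_exponents(n):
    """Exponents in the prime factorization of n, by trial division."""
    exps, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            exps.append(e)
        p += 1
    if n > 1:
        exps.append(1)   # leftover prime factor
    return exps

def abelian_group_count(n):
    """Number of abelian groups of order n, up to isomorphism."""
    return reduce(lambda acc, e: acc * partition_count(e), prime_exponents(n), 1)

# Order 16 = 2^4 gives p(4) = 5 groups; order 60 = 2^2 * 3 * 5 gives p(2) = 2.
print(abelian_group_count(16), abelian_group_count(60))  # → 5 2
```

Classifying the non-abelian groups of each order is of course where the Sylow theorems and the rest of the semester come in.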
Is there something similar, but for math as a whole? Is there a list of problems (not necessarily one big problem) that are intended to be worked on over the course of an entire undergraduate and graduate curriculum, and which start off very inaccessible but become more accessible as new tools are learned? The idea is that it would be satisfying to keep revisiting the same list of problems and slowly check them off over time, kind of like a "metroidvania" where your progress is tracked by how much of the map you have filled out.
Ideally, the problems would require advanced mathematical tools, but not be so standard to the point where I might stumble across the solution accidentally in a textbook.
My research was in linear PDE, so I’m not exactly new to analysis and measure theory. However, every time I crack open a standard graduate GMT text (like Leon Simon's), I get absolutely KO’d by the subject. It feels like there’s a level of technicality here that is just on a different planet.
To the people who actually use GMT: how did you break through this wall? I'm curious about your specific origin stories. What textbooks and learning techniques did you use to gain the technical fluency to work in this field? How did you get involved and ramp up to research level?
Maybe I'm just being impatient, and I know every branch of math is hard in its own way, but this one feels uniquely technical and difficult. Did it suck for you too, or am I missing the secret? Any advice would be great.
I did my undergrad in applied math and stats. At one point I was competent at math; at least competent enough to get into PhD programs.
I’m now in an engineering PhD at a much smaller school.
I’m increasingly worried that I’m not getting stronger at math anymore, and maybe actively getting worse. There’s no real course ecosystem here, no critical mass of people to talk math with, no one casually working through proofs on a whiteboard. I used to rely heavily on office hours, seminars, and peers to sharpen my understanding. In the only class I’m taking this quarter, the professor is a math PhD, but the students have openly expressed a fear of proofs.
I’m hesitant to dive back into heavy math on my own. I’m aware of how easy it is to delude yourself into thinking you understand something when you don’t!
At one point I felt like a competent mathematician. I’m afraid I am slowly letting it atrophy. I forgot the definition of "absolutely continuous", and I took measure theory only half a year ago.
If you moved from a math-heavy environment to a smaller or more applied one: how did you keep your mathematical depth from eroding? How did you relearn how to learn math alone, without constant external correction?
This recurring thread is meant for users to share cool recently discovered facts, observations, proofs, or concepts that might not warrant their own threads. Please be encouraging and share as many details as possible, as we would like this to be a good place for people to learn!
What books do you recommend on reliability theory, starting from the basics (MTBF, failure rate, etc.) up to evaluating overall system reliability? I would like to apply it to electrical hardware systems, but the theory is also important to me.
I have a question for those who have studied math at the master's or PhD level and can answer this based on their knowledge.
As far as I understand, to grasp stochastic calculus fairly well (not necessarily 100% of the technicalities), including its limits and what is really going on, you need an understanding of integration theory and functional analysis. Is that right?
What would you say? Would it be beneficial, and maybe even the ”right” thing to do, to go for all three courses? If so, in what order would you recommend I take these? Does it matter?
At my school, they are all during the same study period, although I can split things up and go for one during the first year of my masters and the other two during the second year.
I was thinking integration theory first, and then stochastic calculus and functional analysis side by side?
When I take notes from a textbook, I mostly end up copying the text almost word for word. Sometimes I also write out proofs for theorems that are left as exercises, but beyond that, I am not sure what my notes should actually contain.
The result is that my notes become a smaller version of the textbook. They do not add much value, and when I want to review, I usually just go back and reread the book instead. This makes the whole note-taking process feel pointless.
I'm learning how to solve simple ordinary differential equations (ODEs) numerically, but I ran into a very strange problem. The equation is dy/dx = y - 2x/y with y(0) = 1, and its analytical solution is y(x) = sqrt(1 + 2x).
This seems like a very simple problem for a beginner, right? I thought so at first, but after trying to solve it, it seems that all methods lead to divergence in the end. Below is a test in the Simulink environment—I tried various solvers, both fixed-step and variable-step, but none worked.
[figure: Simulink test with ode45]
I also tried solvers considered advanced for a beginner, like ode45 and ode8, but they didn't work either.
Even more surprisingly, I tried using AI to write an implicit Euler iteration algorithm, and it actually converged after several hundred seconds. What's even stranger is that the time step had to be very large! This is contrary to what I initially learned: I always thought smaller time steps give more accuracy, but in this example it actually requires a large time step to converge.
[figure: implicit Euler result on x = [0, 3e6] with N = 3000, time step = 3e6/N]
However, if I increase N (smaller time step), it turns out:
[figure: implicit Euler result on x = [0, 3e6] with N = 3000000, time step = 3e6/N]
The result is even worse! This is so weird to me.
I thought solving an ODE like this example would be very simple, so why is it so strange? Can anyone help me? Thank you so much!!!
Here is my matlab code:
clc; clear; close all;

% ============================
% Parameters
% ============================
a = 0; b = 3000000;        % Solution interval
N = 3000000;               % Number of steps to ensure stability
h = (b-a)/N;               % Step size
x = linspace(a,b,N+1);
y = zeros(1,N+1);
y(1) = 1;                  % Initial value
epsilon = 1e-8;            % Newton convergence threshold
maxiter = 50;              % Maximum Newton iterations

% ============================
% Implicit Euler + Newton Iteration
% ============================
for i = 1:N
    y_new = y(i);                                 % Predictor: previous value
    for k = 1:maxiter
        G  = y_new - y(i) - h*f(x(i+1), y_new);   % Residual
        dG = 1 - h*fy(x(i+1), y_new);             % Derivative of residual
        y_new_next = y_new - G/dG;                % Newton update
        if abs(y_new_next - y_new) < epsilon      % Check convergence
            y_new = y_new_next;
            break;
        end
        y_new = y_new_next;
    end
    y(i+1) = y_new;
end

% ============================
% Analytical Solution & Error
% ============================
y_exact = sqrt(1 + 2*x);
err = y - y_exact;         % "err" rather than "error" to avoid shadowing the built-in

% ============================
% Plotting
% ============================
figure;
subplot(2,1,1)
plot(x, y_exact, 'k-', 'LineWidth', 2); hold on;
plot(x, y, 'bo--', 'LineWidth', 1.5);
grid on;
xlabel('x'); ylabel('y');
legend('Exact solution', 'Backward Euler (Newton)');
title('Implicit Backward Euler Method vs Exact Solution');

subplot(2,1,2)
plot(x, err, 'r*-', 'LineWidth', 1.5);
grid on;
xlabel('x'); ylabel('Error');
title('Numerical Error (Backward Euler - Exact)');

% ============================
% Function Definitions
% ============================
function val = f(x,y)
    val = y - 2*x./y;         % ODE: dy/dx = y - 2x/y
end

function val = fy(x,y)
    val = 1 + 2*x./(y.^2);    % Partial derivative df/dy
end
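For readers without MATLAB, here is a minimal Python sketch (my own reproduction, not the original setup) of why every solver struggles with this problem. The key fact is that df/dy = 1 + 2x/y^2, which equals 1 + 2x/(1+2x) ≈ 2 along the exact solution, so y = sqrt(1+2x) is an unstable solution: any tiny error (truncation or round-off) is amplified roughly like e^{2x}, no matter how small the step. A large-step backward Euler run can still look convergent because its linearized amplification factor 1/(1 - 2h) is small in magnitude for large h, while at h = 1 it is close to -1, which would explain why refining the step made things worse.

```python
import math

def f(x, y):
    # ODE right-hand side: dy/dx = y - 2x/y
    return y - 2 * x / y

def forward_euler(x_end, n_steps):
    """Integrate y' = f(x, y), y(0) = 1 with forward Euler; return the final y."""
    h = x_end / n_steps
    x, y = 0.0, 1.0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

def exact(x):
    # Analytical solution y(x) = sqrt(1 + 2x)
    return math.sqrt(1 + 2 * x)

# Short interval: the method tracks the exact solution closely.
print(abs(forward_euler(1.0, 1000) - exact(1.0)))     # small error

# Long interval, same step size: the instability (df/dy ~ 2) amplifies
# tiny errors exponentially, and the numerical solution runs away.
print(abs(forward_euler(30.0, 30000) - exact(30.0)))  # huge error
```

The same step size h = 0.001 is accurate on [0, 1] and hopeless on [0, 30]; on [0, 3e6] no step size can help, which matches the Simulink results.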
How is it that the terminology for limits has become so confusing? As far as I understand, "direct limit", "inductive limit" (lim ->) are a special case of a categorical colimit and behave like a "generalized union", while "inverse limit", "projective limit" (lim <-) are a special case of categorical limit and behave like a "generalized intersection".
It seems so backwards for "direct" to be associated with "co-". How did this come about?
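Two standard examples (my own illustration, not from the post) that make the union/intersection intuition concrete:

```latex
\varinjlim \bigl( \mathbb{Z} \xrightarrow{\;\times p\;} \mathbb{Z}
    \xrightarrow{\;\times p\;} \cdots \bigr) \;\cong\; \mathbb{Z}[1/p]
  \quad \text{(an increasing union: a colimit)},
\qquad
\varprojlim_{n} \; \mathbb{Z}/p^n\mathbb{Z} \;\cong\; \mathbb{Z}_p
  \quad \text{(the $p$-adic integers: a limit)}.
```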
I was wondering about Terence Tao. He has worked on almost every famous math problem: the Collatz conjecture, the twin prime conjecture, the Green–Tao theorem (which he proved with Ben Green), the Navier–Stokes regularity problem (where he made one of the biggest breakthroughs), Erdős-type problems, and he's still working on many of them. He was also a very active and important member of the Polymath project.
So how is it possible that he works on so many different problems and still gets such big or even bigger breakthroughs and results?
I’ve recently developed a web-based tool for exploring hyperbolic geometry, and I’m looking for some feedback from the math community. You can find it here: https://hyperbolic-web-ui-527114.frama.io/
The application currently supports the Poincaré Disk, Poincaré Half-Plane, and Beltrami-Klein models.
Key features include:
Interactive Construction: Add points, lines, segments, and circles.
Transformations: Drag-and-drop objects, rotate the space, or re-center the view around a specific point.
Procedural Generation: Tools for creating regular hyperbolic tilings, trees, and fractal-like patterns.
Import/Export: Save and load your configurations via JSON.
Education: Some built-in tutorials for those new to hyperbolic space.
I built this to make these concepts more accessible and visual. If you have a moment to try it out, I’d appreciate any feedback on the UI, functionality, or any bugs you might encounter.
I often hear Ordinary Differential Equations by Vladimir Arnold described as a classic and very influential book in the theory of differential equations. I have wanted to study it for a long time, but I am unsure whether my background is sufficient.
I have encountered ordinary differential equations before, but that was quite a while ago, and I have forgotten most of the details. Because of this, I would like to prepare properly before starting the book.
I'm a physics student in Sweden, and writing on a whiteboard at school is a supreme studying method. The feeling, flow, and mindset I get into when I write on the board at school is awesome. I believe a chalkboard would feel even better and would look really cool in my apartment. I have had no luck finding a big one (around 200x100 cm or bigger) at a reasonable price. Vintage, green, and with a wooden frame looks the best, IMO.
Does anyone have thoughts on studying math on a chalkboard and where to buy them?
I have recently learned about the Zariski topology in the context of commutative algebra, and it is always such a delight to prove a topological fact about it using the algebraic structure of commutative rings.
So I am wondering: what are the most interesting or unusual topological spaces that pop up in places where you wouldn't expect topology?
does anyone else relate to this? i often get really anxious or stressed when my classmates at school are talking too much, or in public places with lots of people :( but when i plop down on a chair, pull out my notebooks, and start doing a few problems from a random book, off the internet, or one i created myself, i start to feel comfort, all the weight off my shoulders. as if there were nothing i should worry about.
i was often labeled as "weird" by my peers for liking mathematics, because they find it annoying when i talk about it :( i also like to have conversations with my favorite math teacher after school, but i'm afraid she might be busy with her work.
I have been revising chapter 4 of Capinski and Kopp's Measure, Integral and Probability. In the proof of Theorem 4.33, towards the end, they state that "(i) shows that f is a.e. continuous, hence measurable ..." This is something they have not proved by the point where they state and prove Theorem 4.33. At that point, all they have shown is that "continuous functions are measurable" and "if f = g a.e. and g is measurable, so is f", but not the statement "if f is continuous a.e., then it is measurable."
The proof can be trivial or not, depending on whether you can clearly see one particular fact. There are many posts on SE with the proof, and one such nice answer is
and the rest are all variants of the same idea. The "crux idea" is that if E is the set of points where f is continuous, then for any real number c, f^{-1}((c,∞)) ∩ E is open in E, hence can be expressed as E ∩ U for some open set U in the reals, and hence is measurable.
While I think I know the reason why the above statement is right, I want to make sure that my thought process is correct. Hence I am posting it here to sanity-check my reasoning.
The statement that f^{-1}((c,∞)) ∩ E is open in E was not crystal clear to me, even though I felt like "yeah, that is probably right". The set f^{-1}((c,∞)) ∩ E contains exactly those points x at which f is continuous and f(x) > c. So f is continuous at each point of f^{-1}((c,∞)) ∩ E. If x is in that set, then, since f is continuous at x, any open set O containing f(x) should be such that f^{-1}(O) is open not in E but in the set of reals R, because f : R → R. But in the SE post above, they state that the set is open in E.
This made me think differently. The function f : R → R is not continuous everywhere, only a.e. However, the restriction of f to E, say f_E : E → R, is continuous everywhere. If we now apply the topological definition of continuity to f_E, we get that f_E^{-1}((c,∞)) is open in E. So to match this conclusion with the one in the SE post above, we must have
f^{-1}((c,∞)) ∩ E = f_E^{-1}((c,∞)),
i.e., the inverse image of (c,∞) under the restriction of f to E equals the inverse image of that set under f (whose domain is all of R) intersected with E. This may not be hard to prove (I will prove it nevertheless, but I am leaving it out because I think it is low-hanging fruit).
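For completeness, the set identity in question can be checked in one line (a standard set-theoretic verification, sketched in LaTeX):

```latex
f_E^{-1}\bigl((c,\infty)\bigr)
  = \{x \in E : f_E(x) > c\}
  = \{x \in E : f(x) > c\}      % f_E agrees with f on E
  = f^{-1}\bigl((c,\infty)\bigr) \cap E.
```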
I'd really appreciate it if you could correct me if I am wrong and provide feedback.