More than most fields, mathematicians define themselves by their discipline rather than their topic. A proposition is not considered to be mathematically known until it has been proven. Since people employed as academics are judged by the production of new knowledge, mathematicians see their business as proving things. Therefore, mathematics has come to be seen not as the study of numbers, but as the study of things which can be proven. This leads to the curious result that a person who spent their time learning all about numbers, reading facts about numbers asserted by others, and thinking about numbers, but not proving anything or reading others' proofs, would not be considered a mathematician by many mathematicians, even though most other human beings would call them a mathematician (and according to most dictionary definitions, such a person would be a mathematician).
It should be noted that the identification of mathematics with things that can be proven, where "proven" is identified with "an irrefutable argument", would be objected to by some philosophers, who believe that irrefutable arguments exist about certain philosophical topics which are not in the form of math-style proofs (perhaps "I think therefore I am" is one, or more to the point "You think, therefore you are"; philosophically speaking, it is clear to anyone able to understand that sentence that they think, therefore it follows that they exist; but by the standards of a mathematical proof, no justification has been provided for the premise "you think", and so "you are" has only been proven contingent on the acceptance of the axiom "you think").
In mathematics there are "existence proofs". The name is misleading; an existence proof is really just a proof that some construct could POTENTIALLY exist in the sense that its definition is not inherently contradictory (not a proof of ACTUAL existence in the sense that, e.g., a human with your name and age exists in your town this year). In other words, it is a proof that NOT(it is necessarily true that the construct does not exist).
In mathematics, names for things are often 'stolen' from ordinary English and then used to mean something very technical and specific. I say the names are 'stolen' because they are used to mean something very different from the ordinary English meaning. I mention this to save you from wasting time trying to find a substantial connection between mathematical definitions and the ordinary English meanings of their names; typically the connection is so slight as to be almost irrelevant.
For example, the names 'group' and 'set' both mean something very specific (and different from each other) in math. These technical meanings are NOT derived from some close analysis of the exact difference between what 'group' and 'set' mean in English; rather, in English, both of these words mean collections of objects, and in math, at one time, someone (Galois) came up with some very technical thing that needed a name, and it was a special type of collection of objects, so he stole the name 'group' for his concept, and then later on someone else came up with some other very technical thing that needed a name, and it was also a special type of collection of objects, and perhaps they knew that 'group' was already claimed, so they stole the name 'set' for their concept. If you took someone who knew English but no math and taught them the definitions of these two concepts and then asked which should be called 'group' and which should be called 'set', they wouldn't reliably be able to tell you.
Similarly we have 'normal matrices' and 'normal subgroups'. The definitions of these technical concepts are only tangentially related to the English definition of 'normal', and you probably would not be able to derive the mathematical definitions of these just from knowing all about matrices and subgroups and also knowing the ordinary English definition of the word 'normal'. Nor would you gain great insight into the concept of 'normalcy' in the ordinary English sense from pondering the mathematics of normal matrices and normal subgroups.
Addition, its inverse subtraction, multiplication, and its inverse division are called elementary arithmetic. If you start with positive integers and only apply addition and multiplication, you will always get positive integers back. If you start with non-negative integers, you will always get non-negative integers back. If you allow subtraction, you may get negative integer outputs, too. If you allow division, you may get rational number outputs (fractions), and you also have the tricky case of division-by-zero, which is usually thought of as 'undefined' or an illegal operation, but in some contexts is defined to be infinity, with the sign of the infinity being the sign of the numerator (often 0/0 is still a special case here; sometimes an 'unsigned infinity' or NaN is introduced for this) [1].
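As a small illustration of these closure facts and of the IEEE 754 floating-point convention for division by zero, here is a sketch in Python (the fractions module and NumPy are just convenient tools for the demonstration, not anything the paragraph above requires):

    # Closure: integers are closed under + and *, but division leads out of the integers.
    from fractions import Fraction
    import numpy as np

    print(2 + 3, 2 * 3)                    # 5 6  (still integers)
    print(Fraction(2, 1) / Fraction(3, 1)) # 2/3  (a rational number, not an integer)

    # Floating-point division by zero follows the IEEE 754 convention: a nonzero
    # numerator gives an infinity whose sign matches the numerator, and 0/0 gives NaN.
    with np.errstate(divide='ignore', invalid='ignore'):
        print(np.array([1.0, -1.0, 0.0]) / 0.0)   # [ inf -inf  nan]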
For rational or real numbers, elementary arithmetic obeys the "field axioms".
There are other things that obey the field axioms; such things are called "fields". One example is 'finite fields'; for each prime number p, there is a finite field "of order" p, which is basically the integers mod p (that is, adding p to any number gives you the same number back; adding p+1 is the same as adding 1; etc; you can see that there are really only p distinct elements in this field). There are also more complicated finite fields eg [2]. The concept of a 'field' does not include exponentiation or further hyperoperations ( https://en.wikipedia.org/wiki/Field_%28mathematics%29#Exponentiation ).
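Here is a minimal sketch of the finite field of order p, taking p = 7 as an example (the multiplicative inverse via Fermat's little theorem, a^(p-2) mod p, is a standard trick, my choice rather than something from the text above):

    # The finite field of order p: integers mod p, for prime p.
    p = 7

    def add(a, b): return (a + b) % p
    def mul(a, b): return (a * b) % p

    def inv(a):
        # multiplicative inverse via Fermat's little theorem: a^(p-2) = a^(-1) (mod p)
        assert a % p != 0
        return pow(a, p - 2, p)

    print(add(5, 4))        # 2, because 9 mod 7 = 2
    print(mul(3, inv(3)))   # 1, so inv(3) really is 3's multiplicative inverse
    print(add(3, p))        # 3: adding p to any element gives the same element back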
Elementary arithmetic can be axiomatized by the Peano axioms.
"Successor" is the unary operation S(x) = x + 1 (also called "increment").
You may have noticed that addition is repeated successor, multiplication is repeated addition, and exponentiation is repeated multiplication. This suggests that these three operations are really facets of the same concept, and that they could be generalized to a three-input function that can be addition, multiplication, or exponentiation in the first two arguments depending on the third argument. The best-known formalization of this idea is an operation called "hyperoperation".
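A minimal recursive sketch of the hyperoperation, using the standard base cases (the function name 'hyper' is just my label):

    def hyper(n, a, b):
        # n=0: successor, n=1: addition, n=2: multiplication, n=3: exponentiation, n=4: tetration, ...
        if n == 0:
            return b + 1
        if b == 0:
            return a if n == 1 else (0 if n == 2 else 1)
        return hyper(n - 1, a, hyper(n, a, b - 1))

    print(hyper(1, 2, 3))  # 5   (2 + 3)
    print(hyper(2, 2, 3))  # 6   (2 * 3)
    print(hyper(3, 2, 3))  # 8   (2 ** 3)
    print(hyper(4, 2, 3))  # 16  (tetration: 2 ** (2 ** 2))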
Some important arithmetic functions are:
deriving from positive vs negative integers:
deriving from trying to do division with integers instead of reals:
deriving from converting reals to integers:
You also hear about the 'reciprocal' function, which is just a special case of division: reciprocal(x) = 1/x.
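As a guess at the usual candidates for these (absolute value and sign for positive vs negative integers; integer quotient and remainder for division with integers; floor, ceiling, rounding, and truncation for converting reals to integers), here is a quick Python tour; the particular selection is an assumption on my part:

    import math

    # positive vs negative integers: absolute value and sign
    print(abs(-5), math.copysign(1, -5))       # 5 -1.0

    # integer division: quotient and remainder (modulo)
    print(divmod(17, 5))                       # (3, 2), since 17 = 3*5 + 2

    # converting reals to integers: floor, ceiling, rounding, truncation
    print(math.floor(2.7), math.ceil(2.3), round(2.5), math.trunc(-2.7))   # 2 3 2 -2

    # reciprocal is just a special case of division
    print(1 / 4)                               # 0.25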
There are also the inequality operators, which can be thought of as Boolean functions, that is, functions from inputs to an output which is always either "TRUE" or "FALSE":
The main "trig" (trigonometry) functions are:
These relate the angles of a right triangle to the ratios of the lengths of its sides:
That is, the input to these functions is an angle, and the output is the indicated ratio. This won't make much sense without the diagram: https://en.wikipedia.org/wiki/Trigonometric_functions#/media/File:Trigonometry_triangle.svg
The inputs, which are angles, can be in units of degrees (360 degrees go all the way around the circle), or radians (2*pi radians go all the way around the circle).
A mnemonic to help remember which is which is: SOHCAHTOA, which doesn't look very memorable until you read it aloud; it is pronounced SO-KUH-TOE-UH: Sin Opposite over Hypotenuse, Cos Adjacent over Hypotenuse, Tan Opposite over Adjacent.
Another way to think about these is that if you start at the rightmost point of a unit circle (which is point (1,0)) and proceed to trace the circle counterclockwise by the input angle, then the point you end up at has a horizontal coordinate of cos(angle) and a vertical coordinate of sin(angle) (note that cos(0) = 1 and sin(0) = 0). This can be seen by drawing right triangles inside a unit circle: https://en.wikipedia.org/wiki/Trigonometric_functions#/media/File:Webysther_20160305_-_Ciclo_trigonom%C3%A9trico.svg (see also https://www.khanacademy.org/math/trigonometry/unit-circle-trig-func/unit-circle-definition-of-trig-functions/v/unit-circle-definition-of-trig-functions-1 ).
tan(a) is equal to sin(a)/cos(a) (b/c (opposite/hypotenuse)/(adjacent/hypotenuse) = opposite/adjacent), so it's the ratio of the vertical coord to the horizontal coord (note that this ratio goes to infinity at +-pi/2, or +-90 degrees, where the horizontal coordinate is zero). In the unit circle interpretation, tan is the length of a certain tangent vertical line segment, see https://upload.wikimedia.org/wikipedia/commons/c/c7/Tangent-unit-circle.svg and https://www.youtube.com/watch?v=6-tsSJx-P18 .
There are also the inverses of these: asin, acos, atan (which are the same as arcsin, arccos, arctan). Because there are multiple angle inputs that map to the same output with sin, cos, or tan, a convention is needed for which angle the inverse function should return; the conventional choice is called the 'principal value' (asin and atan return angles in [-pi/2, pi/2], and acos returns angles in [0, pi]).
Sometimes you also hear about sec (secant), csc (cosecant), cot (cotangent). These are just fancy names for the reciprocals of cos, sin, tan.
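A quick numerical sanity check of the unit-circle reading of sin/cos/tan and of the principal-value convention; a sketch using Python's math module, with 30 degrees as an arbitrary example angle:

    import math

    a = math.radians(30)                     # convert 30 degrees to radians
    print(math.cos(a), math.sin(a))          # ~0.866, ~0.5: coordinates of the point on the unit circle
    print(math.tan(a), math.sin(a) / math.cos(a))   # both ~0.577: tan = sin/cos
    print(math.cos(a)**2 + math.sin(a)**2)   # 1.0: the point really is on the unit circle

    # inverse functions return the principal value:
    print(math.degrees(math.asin(0.5)))      # ~30.0, even though sin(150 degrees) is also 0.5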
https://en.m.wikipedia.org/wiki/Algebraic_number#Examples
todo
define monic polynomial in math chapter
http://mathworld.wolfram.com/ConstructibleNumber.html
hierarchy of reals, closed form, algebraic, transcendental (also define rational, irrational) etc
todo
sub-(whatever; eg subset, subgroup), direct product, homomorphism
relations note: a set can be composed with a relation to get another set
functions
arity
infix notation for binary functions
boolean functions
sets, characteristic functions (mb this is not exactly universal algebra)
injective, surjective, bijective
image
inverse
invertible
inverse image
monotone functions
equivalence relation
equivalence classes
total order
partial order
hasse diagram
preorder
graph
DAG
tree
isomorphism
automorphism
homomorphism
quotient
endomorphism
embedding
inclusion maps
cover
direct products
semilattices
lattices
galois connection
groups
Group theory could have arisen (i'm not saying this is how it actually happened, historically) from noticing some properties of addition and multiplication (except that for multiplication to be considered a group, zero must be omitted) and abstracting those properties; then noticing that the same properties are also useful for describing the compositions of permutation operations, and the compositions of rotation and reflection operations. Note that often it makes more sense to think of groups as fundamentally describing the composition of permutation operations, instead of thinking of them as generalizations of addition and multiplication.
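As a concrete sketch of 'groups as compositions of permutations', here is a brute-force check that composing permutations of three items is associative, has an identity and inverses, but is not commutative (the tuple representation and the helper name 'compose' are my choices):

    from itertools import permutations

    # Permutations of (0,1,2), represented as tuples: p[i] is where i is sent.
    def compose(p, q):          # apply q first, then p
        return tuple(p[q[i]] for i in range(len(q)))

    identity = (0, 1, 2)
    perms = list(permutations(range(3)))    # the 6 elements of the symmetric group S_3

    # group properties: associativity, identity, inverses
    assert all(compose(compose(p, q), r) == compose(p, compose(q, r))
               for p in perms for q in perms for r in perms)
    assert all(compose(p, identity) == p for p in perms)
    assert all(any(compose(p, q) == identity for q in perms) for p in perms)

    # but composition is NOT commutative:
    p, q = (1, 0, 2), (0, 2, 1)
    print(compose(p, q), compose(q, p))     # (1, 2, 0) vs (2, 0, 1)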
For historical reasons, commutative groups are usually called 'Abelian' instead of commutative. I'll say commutative here.
cyclic groups
the symmetric groups
fields
tarski/dedekind's high school algebra axioms (and the tarski high school algebra problem)
example of noncommutative yet associative operations: rotations (of 3-d space; rotations of the plane do commute), string concatenation
example of nonassociative operation: subtraction
example of commutative yet nonassociative operation: "Consider the set of all points on a line (or a plane, or 3-dimensional space --- it won't matter for the example) and let the operation send each pair (P,Q) of points to the midpoint of the segment PQ" -- http://math.stackexchange.com/questions/327422/is-all-algebraic-commutative-operation-always-associative
https://en.wikipedia.org/wiki/Commutative_non-associative_magmas : consider the game rock paper scissors; let's have r,p,s and when you apply 'op' (represented here by infix '*') to two of them, you get the one that 'wins', eg r*p = p*r = p, because paper beats rock. * is commutative but non-associative; r*(p*s) = r != s = (r*p)*s
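The same rock-paper-scissors operation, checked by brute force (a sketch; I take x*x = x for equal arguments, which the paragraph above doesn't specify):

    beats = {('r', 's'): 'r', ('s', 'p'): 's', ('p', 'r'): 'p'}

    def op(x, y):
        if x == y:
            return x
        return beats.get((x, y)) or beats[(y, x)]   # whichever of the two 'wins'

    elems = ['r', 'p', 's']
    print(all(op(x, y) == op(y, x) for x in elems for y in elems))    # True: commutative
    print(op('r', op('p', 's')), op(op('r', 'p'), 's'))               # r s  -- not associative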
this table of (the names given to) generalizations of groups (by dropping various group axioms) is useful: [3]
rings and fields: one of the applications of groups is the characterization of addition and multiplication. By adding in more properties of addition and multiplication, we get rings, and from adding in even more, we get fields. Note that not too much intuition can be drawn from the choice of names 'group', 'ring', 'field'; don't try to visualize a ring as a ring on your finger, for example; just imagine that these are arbitrary words used to describe a bunch of things with attached operation(s) which obey certain axioms.
A ring is a set with two binary operations, call them + and * (or addition and multiplication), with the following axioms: (a) the set with + is a commutative group (+ is associative and commutative, there is an additive identity 0, and every element has an additive inverse); (b) * is associative and has an identity element 1; and (c) * distributes over + (a*(b+c) = a*b + a*c, and (b+c)*a = b*a + c*a).
Note that some authors don't have the condition that * has an identity; but other authors call that a 'rng' and reserve 'ring' for the structure given here (the one with the multiplicative identity requirement). A 'commutative ring' means a ring in which the * operation is commutative (the + operation is always commutative); but some books just call commutative rings 'rings'.
A field is a commutative ring such that every element, except zero, has a multiplicative inverse. Note: the 'except zero' there is what prevents us from just saying that addition and multiplication are just two groups on the same set of elements.
An example of an abstract field which feels pretty different from ordinary addition and multiplication over numbers is the finite field with four elements called "GF(4)":
Addition (+):

  + | 0 1 A B
  --+--------
  0 | 0 1 A B
  1 | 1 0 B A
  A | A B 0 1
  B | B A 1 0

Multiplication (*):

  * | 0 1 A B
  --+--------
  0 | 0 0 0 0
  1 | 0 1 A B
  A | 0 A B 1
  B | 0 B 1 A
Note that when restricted to the subset {0,1}, this field works like boolean logic, where + is XOR and * is AND. Note that, counterintuitively, in this field you can't start with 0 and successively add 1 to generate all of the elements; 0+1 = 1 but 1+1 = 0, so you never get to A or B.
(from [4])
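A brute-force check that the tables above really do satisfy the field axioms (a sketch; the tables are simply re-encoded as Python dictionaries):

    elems = ['0', '1', 'A', 'B']
    add_rows = ['01AB', '10BA', 'AB01', 'BA10']
    mul_rows = ['0000', '01AB', '0AB1', '0B1A']
    add = {(x, y): add_rows[i][j] for i, x in enumerate(elems) for j, y in enumerate(elems)}
    mul = {(x, y): mul_rows[i][j] for i, x in enumerate(elems) for j, y in enumerate(elems)}

    triples = [(x, y, z) for x in elems for y in elems for z in elems]

    assert all(add[x, add[y, z]] == add[add[x, y], z] for x, y, z in triples)          # + associative
    assert all(mul[x, mul[y, z]] == mul[mul[x, y], z] for x, y, z in triples)          # * associative
    assert all(add[x, y] == add[y, x] and mul[x, y] == mul[y, x] for x, y, z in triples)  # both commutative
    assert all(mul[x, add[y, z]] == add[mul[x, y], mul[x, z]] for x, y, z in triples)  # * distributes over +
    assert all(add['0', x] == x and mul['1', x] == x for x in elems)                   # identities 0 and 1
    assert all(any(add[x, y] == '0' for y in elems) for x in elems)                    # additive inverses
    assert all(any(mul[x, y] == '1' for y in elems) for x in elems if x != '0')        # mult. inverses (except 0)
    print("the GF(4) tables satisfy the field axioms")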
To summarize, groups, rings, and fields are about operators over some set of objects and the properties of these operators. A group is the most general and a field is the most specific, and a ring is in the middle.
Continuing to summarize, a group is a bunch of objects with one binary operation, 'multiplication', such that the operation is associative and invertible and one object is the multiplicative identity (there's also the closure/totality axiom but i'll skip over that in this paragraph). A field is a bunch of objects with two binary operations, 'addition' and 'multiplication', such that addition is a commutative group, multiplication is a commutative group except that it's not fully invertible because zero doesn't have a multiplicative inverse, and we have distributivity of multiplication over addition. A ring is somewhere in the middle; we have two operations, addition and multiplication, and addition is a commutative group, and we have distributivity, but multiplication need not be commutative, nor have any multiplicative inverses.
Matrices are rectangular arrays of numbers. But if that's ALL they were, then why is matrix multiplication this weird thing, instead of just multiplying the corresponding numbers together (elementwise multiplication)? The answer is that, really, 'arrays of numbers' is a data representation that is more general than matrices. Matrices just happen to be one thing that can be represented using rectangular arrays. In math, matrices are fundamentally the same as linear functions. There is a theorem that any linear function can be represented as a 2-d matrix (todo).
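A small NumPy illustration of this point: matrix multiplication is defined the way it is so that it matches composition of linear functions, while elementwise multiplication is a different operation (the particular matrices are arbitrary examples):

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [5.0, -1.0]])
    x = np.array([2.0, 7.0])

    # 'multiply the matrices, then apply' equals 'apply B, then apply A':
    print((A @ B) @ x)          # same vector as the next line
    print(A @ (B @ x))          # composition of the two linear functions

    # elementwise multiplication is a different (also sometimes useful) operation:
    print(A * B)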
(todo define linear functions)
Linear functions (=linear transformations) have been shown to be nothing more than combinations of rotations and stretchings (and creation or deletion of dimensions, but you can think of that as an extreme version of stretching); this is essentially the content of the singular value decomposition (todo)
The determinant measures how much area/volume/etc the image of a unit square/cube/etc has after the transformation has been applied to it. If one dimension was deleted, then the transformed unit square has 'zero' area/etc in the image, in the same way that the area of a rectangle is non-zero but the area of a line segment is zero. This is why zero determinants are useful for detecting deletion of dimensions.
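A numeric illustration (my examples): the determinant of a pure stretching is the area-scaling factor, and a transformation that deletes a dimension has determinant zero:

    import numpy as np

    stretch = np.array([[2.0, 0.0], [0.0, 3.0]])    # stretch x by 2 and y by 3
    print(np.linalg.det(stretch))                   # 6.0: the image of a unit square has area 6

    flatten = np.array([[1.0, 0.0], [0.0, 0.0]])    # projects everything onto the x-axis
    print(np.linalg.det(flatten))                   # 0.0: the image of the unit square is a segment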
An isometry is a transformation that preserves distances, eg a linear transformation with no stretching. The determinant of an isometry is either 1 or -1.
Isometries correspond to 'orthonormal' matrices (more standardly called 'orthogonal' matrices); an orthonormal matrix is one whose rows and columns are unit vectors and orthogonal to each other.
Orthonormal matrices have the properties that:
Q^T Q = Q Q^T = I
or in other words,
Q^T = Q^{-1}
The complex analog of 'orthogonal matrix' is 'unitary matrix'. Like orthonormal matrices, unitary matrices represent isometries; the absolute value of their determinant is 1; and they preserve inner products (and hence norms), that is, for any two complex vectors, if you multiply each of them by the same unitary matrix and then take their inner product, you get the same result as if you had taken the inner product directly.
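A sketch checking these properties numerically for a rotation matrix (orthogonal) and for a small unitary matrix; the specific matrices and vectors are my examples:

    import numpy as np

    theta = 0.7
    Q = np.array([[np.cos(theta), -np.sin(theta)],     # a rotation: an isometry
                  [np.sin(theta),  np.cos(theta)]])
    print(np.allclose(Q.T @ Q, np.eye(2)))             # True: Q^T Q = I
    print(np.linalg.det(Q))                            # ~1.0

    U = np.array([[0.0, 1j], [1j, 0.0]])               # a unitary matrix
    print(np.allclose(U.conj().T @ U, np.eye(2)))      # True: U* U = I
    v, w = np.array([1.0 + 2j, 3.0]), np.array([0.5, -1j])
    print(np.vdot(U @ v, U @ w), np.vdot(v, w))        # the same: inner products are preserved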
http://matrixmultiplication.xyz/
A pair of random high-dimensional vectors are likely to be almost orthogonal.
Unlike in 3-D, most of the volume in a hypercube is near the 'corners'.
Similarly, if instances of some class vary along very many dimensions, it is likely that the vast majority of instances are outliers along at least one dimension.
Covering a high-dimensional hypercube with a grid of points requires exponentially many grid points (in the dimension of the space) -- it is usually infeasible to sample all of these points.
A small set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved (the Johnson–Lindenstrauss lemma [5]).
These sorts of facts can make dealing with high-dimensional spaces unintuitive. They can also be put to use, eg the Random Projection method of dimensionality reduction, the [6] of signal reconstruction for signals known to be sparse, or random construction of LDPC error correcting codes.
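As a quick empirical illustration of the near-orthogonality fact above and of a toy random projection (a sketch; the dimensions 10000 and 400 are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)

    # random vectors in 3 dimensions vs 10000 dimensions
    for d in (3, 10000):
        u, v = rng.standard_normal(d), rng.standard_normal(d)
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        print(d, cos)       # the cosine of the angle is typically close to 0 when d is large

    # toy random projection: map 10000-dimensional points down to 400 dimensions
    x, y = rng.standard_normal(10000), rng.standard_normal(10000)
    P = rng.standard_normal((400, 10000)) / np.sqrt(400)
    print(np.linalg.norm(x - y), np.linalg.norm(P @ x - P @ y))   # distances roughly preserved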
"Homotopy of two continuous mappings f,g: X -> Y. A formalization of the intuitive idea of deformability of one mapping into another." -- http://www.encyclopediaofmath.org/index.php/Homotopy
"In topology, two continuous functions from one topological space to another are called homotopic if one can be "continuously deformed" into the other, such a deformation being called a homotopy between the two functions.
"The two dashed paths shown above are homotopic relative to their endpoints. The animation represents one possible homotopy." -- http://en.wikipedia.org/wiki/Homotopy
"More exactly, two mappings and are called homotopic (denoted by ) if there exists a family of continuous mappings f_t : X -> Y, continuously depending on a parameter t \in [0,1], such that f_0 = f, and f_1 = g." -- http://www.encyclopediaofmath.org/index.php/Homotopy
"Consider a world where objects are made of elastic rubber. Two objects are considered equivalent if they can be deformed into each other without tearing the material. If such a transformation between X and Y exists, we say they are topologically equivalent..." -- http://www.ayasdi.com/_downloads/Topological_Analysis_of_Population_Activity_in_Visual_Cortex.pdf
"...it is evident that a possible reason for two objects not to be equivalent is that they differ in the number of holes. Thus, simply counting holes can provide a signature for the object at hand. Holes can exist in different dimensions. A one-dimensional hole is exposed when a one-dimensional loop (a closed curve) on the object cannot be deformed into a single point without tearing the loop. If two such loops can be deformed into one another they define the same hole, which should be counted only once. Analogous definitions can be invoked in higher dimensions. For example, a two-dimensional hole is revealed when a closed two-dimensional oriented surface on the object cannot be deformed into a single point.
This notion of counting holes of different dimensions is formalized by the definition of Betti numbers. The Betti numbers of an object X can be arranged in a sequence, b(X) = (b_0, b_1, b_2, ...), where b_0 represents the number of connected components, b_1 represents the number of one-dimensional holes, b_2 the number of two-dimensional holes, and so forth. An important property of Betti sequences is that if two objects are topologically equivalent (they can be deformed into each other) they share the same Betti sequence. One must note, as we will shortly illustrate, that the reverse is not always true: two objects can be different but have the same Betti sequence." -- http://www.ayasdi.com/_downloads/Topological_Analysis_of_Population_Activity_in_Visual_Cortex.pdf
total function vs (non-total) partial function
computable vs enumerable vs not enumerable
primitive recursion, mu-recursion/turing machines/lambda calc/etc
universality, interpreters
various machine models, polynomial time emulators
halting problem
Peano arithmetic is sometimes taken to be the definition of the natural numbers. The second-order formulation of Peano arithmetic is quite simple and intuitive; see [7].
Peano arithmetic is incomplete (Gödel's First Incompleteness Theorem (1931)) (there are statements about the natural numbers that are true, but that are unprovable from the Peano axioms; in fact, there is no decidable set of axioms that can lead to proofs of all and only the true statements about the natural numbers (so-called true arithmetic)).
Peano arithmetic was first given in a second-order form (with quantification over sets) with finitely many axioms, but later was reformulated into a first-order form (also called an 'elementary' form; "In mathematical logic, an elementary theory is one that involves axioms using only finitary first-order logic, without reference to set theory or using any axioms which have consistency strength equal to set theory." -- [8] ) at the cost of (a) having an infinite number of axioms (an axiom schema) instead of a finite number, (b) needing to include some basic properties of addition and multiplication in the first-order axioms (in second-order Peano arithmetic, these properties could be deduced [9]), and (c) the first-order ('elementary') Peano arithmetic having non-standard models (second-order Peano arithmetic has only one model, up to isomorphism).
The restriction of (first-order, I assume?) Peano arithmetic to only addition (no multiplication) is decidable (this is Presburger arithmetic), and similarly the restriction to just multiplication (no addition) is decidable (Skolem arithmetic). [10].
Links:
Note: is this system different from and stronger than the second-order formulation of Peano arithmetic? it seems to me, yes, because of comprehension axioms, but i'm not sure; [11] makes it sound like Z_2 is just second-order Peano.
Links:
modus ponens/syllogism
direct proof
proof by cases
proof by contrapositive
proof by contradiction
proof by cancellation
proof by symmetry
perhaps see also http://www.people.vcu.edu/~rhammack/BookOfProof/BookOfProof.pdf , http://mail.csufresno.edu/~mnogin/math145fall05/math145fall05-book.pdf for more
how to prove the nonexistence of something: show that every possibility has a set of properties that together preclude it from being the target
contradiction
diagonalization
(algebra)
didn't we already do this above?
over well-ordered sets
well-foundedness? see https://en.wikipedia.org/wiki/Mathematical_induction#Heuristic_justification vs https://en.wikipedia.org/wiki/Transfinite_induction
complete/strong induction
and extending it even to uncountables
and, or, not nand/nor
refer to http://boole.stanford.edu/cs353/handouts/book3.pdf for Post's theorem: (Post) A necessary and sufficient condition that a set of operations form a basis for the nonzeroary Boolean operations is that it contain operations that are respectively nonaffine, nonmonotone, nonstrict, noncostrict, and nonselfdual
Category theory can be seen as arising from (at least) 3 applications:
todo
constructions of the real numbers
continuity
what else?
derivatives
integrals
defining curves and shapes according to the slope of the path that would be traced out if the shape were drawn one point at a time (eg you can specify a ramp as 'put your pencil on the paper, go straight to the right for a while, then the path turns upwards so that for every unit of space traversed horizontally, the path rises two units of space vertically')
formulas relating the amount of change in one variable to the amount of change in another
many applications of calculus to dynamics involve looking at processes via their 'shapes' as if time were a spatial axis; for instance, if you throw a ball vertically into the air, and then graph its height on the vertical axis and time on the horizontal, it'll trace out an arch-shaped curve (a parabola), and the velocity of the ball at each moment is the slope of the curve at that point.
Linear algebra is about:
connecting these:
todo
div, grad, curl
change-of-variables, hessian
Informally, a set of strategies is a Nash equilibrium if no player can do better by unilaterally changing their strategy.
Nash's Existence Theorem: if we allow mixed strategies, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium. [12]
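To illustrate the definition of a Nash equilibrium (not the existence theorem itself), here is a brute-force search for pure-strategy equilibria in the prisoner's dilemma; the payoff numbers are just the usual textbook ones, assumed for the example:

    # Payoffs (row player, column player); strategies: 0 = cooperate, 1 = defect.
    payoff = {
        (0, 0): (-1, -1), (0, 1): (-3,  0),
        (1, 0): ( 0, -3), (1, 1): (-2, -2),
    }

    def is_nash(r, c):
        # no player can do better by unilaterally changing their own strategy
        row_ok = all(payoff[(r, c)][0] >= payoff[(r2, c)][0] for r2 in (0, 1))
        col_ok = all(payoff[(r, c)][1] >= payoff[(r, c2)][1] for c2 in (0, 1))
        return row_ok and col_ok

    print([(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c)])   # [(1, 1)]: defect/defect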
roots; closed-form solutions (in radicals) exist through quartics; no closed-form solution in radicals exists for some quintics
taylor series expansion
todo other elementary school algebra II topics, what are they?
"A polynomial is uniquely determined by its roots ((and their multiplicities)) and its value at any single point which is not a root." [13] [14] [15]
"The fundamental theorem of algebra": "every non-constant single-variable polynomial with complex coefficients has at least one complex root." [16]
The polynomial "factor theorem": "a polynomial f(x) has a factor (x - k) if and only if f(k)=0 (i.e. k is a root)" [17]
A polynomial p of degree n "has n roots in the complex plane, if they are counted with their multiplicities." [18]
"that two polynomials of the same degree n with n+1 points in common are the same." [19]
logs
the approximation for log(1+x) for small x
properties of e
intuition behind e^(i*pi) = -1
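Quick numeric checks for the items above (a sketch; the particular values of x and n are arbitrary):

    import math, cmath

    x = 0.001
    print(math.log(1 + x), x)              # 0.0009995... vs 0.001: log(1+x) ~ x for small x

    n = 1_000_000
    print((1 + 1 / n) ** n, math.e)        # ~2.71828...: e = lim (1 + 1/n)^n

    print(cmath.exp(1j * math.pi))         # (-1 + tiny imaginary part): e^(i*pi) = -1 up to rounding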
Recursion theory is an old-fashioned name for computability theory. It deals with questions like "what is computable?". It is the branch of mathematics that sprang up around Church and Turing's initial work. Although it deals with computation, it has a logic-y flavor, rather than a computer-sciencey-flavor.
A question is "computable" if you can write a computer program that answers the question (within finite time).
The definition of "if you can write a computer program that answers the question" is rigorously defined in recursion theory. There are multiple, provably equivalent ways of defining this, based on how you define the "computer" and what kind of "programs" it takes; such definitions are called "models of computation". One model of computation is the Turing machine. Another is Church's lambda calculus. Anything which is provably equivalent to the Turing machine is called "Turing-equivalent". The Church-Turing hypothesis is that the formal notion of Turing-equivalence is equivalent to the informal notion of a procedure which can be precisely specified and, for any given input, either accomplished with finite resources, or which does not terminate (note: this informal phrasing is my own and may be wrong).
The "finite resources" part is confusing because, since the amount of resources needed may be different for different inputs, in general there is no upper bound on the amount of resources needed. For example, an ideal Turing machine can execute programs that use an arbitrarily large amount of memory. We say that real "computers" are "Turing machines" but technically speaking they are not, because they have finite memory. However, they are extremely similar in spirit; a Turing machine can be said to be an ideal computer with infinite memory.
Another model of computation is Herbrand-Gödel's notion of "general recursion", which first defines something called "primitive recursion" and then extends it to "general recursion" (now you see why this field is named "recursion theory": that's what some of them called computation back in the beginning).
Incidentally "primitive recursion" can generate almost any practically useful function (an example of a function it can't do is a function called Ackerman's function), but it is somewhat difficult to write programs in the style of primitive recursion (for example, you can't have 'while' loops), so most general-purpose programming languages would not fit the primitive recursive constraints.
Here's an example of something that isn't computable. The "halting problem". Given a computer program, tell me if this program will hang (get caught in an infinite loop) or terminate.
The proof that the halting problem isn't computable is that, if there were an algorithm that, given any program, would say if that program hangs, then you could write a wrapper around that algorithm that said, if the input program would hang, then terminate; but if the input program would terminate, then go into an infinite loop. Now run this program, and as input give it itself. If it computes that it will terminate, then it hangs; and if it computes that it will hang, then it terminates.
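The same argument as a Python sketch; 'halts' is the hypothetical decider that the argument shows cannot exist, and 'troublemaker' is the wrapper described above:

    def halts(program, argument):
        # Hypothetical: returns True if program(argument) terminates, False if it loops forever.
        raise NotImplementedError("no such total, always-correct function can exist")

    def troublemaker(program):
        # the 'wrapper' from the argument above
        if halts(program, program):
            while True:        # if the input program would terminate, loop forever
                pass
        else:
            return             # if the input program would loop forever, terminate

    # Feeding troublemaker to itself: if halts(troublemaker, troublemaker) is True,
    # then troublemaker(troublemaker) loops forever; if it is False, it terminates.
    # Either way halts gave the wrong answer, so halts cannot exist.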
A set is computable if, given any potential element, you can compute whether or not that element is in the set.
A synonym of computable is "decidable". This sounds better when the question being answered is a yes/no question. So, the halting problem isn't decidable. A set is decidable if you can decide (using some computer program, within finite time) whether any given element belongs.
If something isn't decidable, it may still be "recursively enumerable". A set is recursively enumerable if there is some computer program which prints out every element of the set one by one (although the set may be infinitely large, as long as any given element will be printed at some finite point in time, it's recursively enumerable).
A synonym of "recursively enumerable" is "semidecidable". If someone asks you if element e is in set S, and S is not decidable, but it is semidecidable, then one thing you can do is run the program that prints out all of the elements of S. If e ever comes up, then you can answer, "yes, it's in the set", but if e never comes up, then you'll be waiting forever. Implementing this strategy in a computer program, you get a program which terminates if the input is in set S, and does not terminate if the input is not in set S.
A computer program which never hangs, no matter what input is given, is called a 'total function'. A computer program which might hang on at least one possible input is called a 'partial function'. This terminology makes sense if whenever the program hangs on some input, you say, "the output corresponding to that input is undefined". The picture is that for total functions, every input has a defined output, but for a partial functions, some inputs have no defined output; partial functions only give you defined outputs for part of the space of inputs.
If you have an 'oracle', that is, a magic box that will evaluate an uncomputable function (such as the halting problem) for you, now what else can you compute, and what is uncomputable (technically speaking, the questions and answers are usually taken to be phrased in terms of the oracle corresponding to an uncomputable (non-recursive) set of natural numbers, and for any natural number you can ask the oracle, "Is this number in your set?")? The "now what else can you compute" is called "relative computability". These sort of questions induce a hierarchy of sets called Turing degrees. A lot of work in recursion theory is results on the structure of Turing degrees. Depending on your philosophical beliefs you might find this work irrelevant.
There are attempts to redo the rest of mathematics while only allowing computable sets. These go by names such as "computable mathematics", "recursive mathematics", "computable model theory", "recursive analysis", etc.
Interestingly, one NP problem is finding a proof for a given proposition, given an upper limit on proof length. Therefore, the P vs NP problem is very fundamental.
'Model checking' refers to exploring whether a quantified proposition is true or false by going thru every possible combination of assignments to bound variables and seeing if there are any falsifying assignments. This can only be done if the types of the variables have only a finite number of values (eg it would take an infinite amount of time to check every integer).
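A toy sketch of this brute-force checking over a finite domain, trying every assignment with itertools.product (the propositions checked are arbitrary examples):

    from itertools import product

    domain = range(5)     # a finite type: the integers 0..4

    # proposition: for all x, y in the domain, x + y == y + x
    print(all(x + y == y + x for x, y in product(domain, repeat=2)))        # True

    # proposition: for all x, y in the domain, x - y == y - x (look for falsifying assignments)
    counterexamples = [(x, y) for x, y in product(domain, repeat=2) if x - y != y - x]
    print(counterexamples[:3])     # [(0, 1), (0, 2), (0, 3)]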
some other people's comments on fields of study:
" In order to go back to a time when mathematics and physics had not yet diverged, you probably have to return to the days of Euler, or even Newton. If you read William Dunham's excellent book, "The Calculus Gallery", in the very first chapter, you will find a discussion of Newton's very basic work on binomial series--something that physicists use almost every day. However, as that book progresses (it goes in chronological order), once you get to Cauchy, who worked in the early 19th century, you begin to see mathematicians turn their focus to questions that no longer address computational questions that could possibly find direct use by physicists, but instead are more or less exist only in the minds of mathematicians.
There certainly is a branch of real analysis, called "classical analysis", which tends to focus more on concrete examples of infinite series, which of course have roots in basic calculus and physics. You can in fact prove a great deal of interesting things about specific infinite series, but you will not invent algebraic geometry or abstract algebra, or be able to fully appreciate the scope of modern 20th century mathematics, if you confine yourself to studying the properties of just one concrete object. If you do, though, you will eventually find yourself studying complex analysis. Somebody who takes this route--that is, to reject the abstract flavor of 20th century mathematics--can learn a great deal about mathematics that also happens to be very useful to physicists. On the other hand, you would be missing out on a great deal of fascinating connections between the abstract mathematics of the 20th century, and applications to problems in computer science.
On the other hand, if you really want to study pure mathematics, using proofs, you'll have to be somewhat patient before you can see applications to physics. These applications will come sporadically. Right away, linear algebra is an example of a very basic pure mathematics subject which is absolutely essential to physics. At about the same level is calculus and infinite series. Then, in mechanics, you will probably encounter what physicists call "analytical mechanics", which is an application of a pure mathematics subject called the "calculus of variations". Your understanding of this subject doesn't have to be very deep to start using it in physics, though. At a slightly higher level is complex analysis, which vastly improves your understanding of infinite series. The next application I can think of is quite a bit more advanced; it concerns general relativity, and could be thought of as an advanced setting for multivariable calculus, but which is inspired by the beauty of Euclidean geometry. I am talking about what used to be called "advanced calculus", but now has many different names. The key object of study is what is called a "manifold". A mathematician would call the subject "differential geometry", whereas a physicist would emphasize the use of objects called tensors. At this point, you will begin to see non-trivial mathematics being used in physics, but with the annoyance that physicists continue to rely on computations rather than complete proofs."
"Some major fields of mathematics are:
1. Algebra. This is like and unlike what you may call algebra today. It is the study of how things are built and decomposed. Indeed, it notes that many "things" can be described entirely in terms of how they are built and decomposed. It is often a good place to begin for programmers as it espouses a way of thinking about the world not dissimilar to the way we model domains in while programming. Some books include Algebra: Chapter 0 by Aluffi and Algebra by Mac Lane