What areas of scientific exploration are left for a "super genius" to arise and shake things up? Many? Few?
The one I am conversing with argues there are none.
There are many interesting problems in computer science; the major ones I can think of are:
P=NP? Algorithms solve computational tasks, but not all algorithms are equal, and not all tasks are computationally equivalent. Decision problems are broadly categorized as P, meaning they can be solved in polynomial time (e.g. O(1), O(n), O(n lg n), O(VE), O(V + E), O(n^2)) on a deterministic Turing machine; or NP, meaning a proposed solution can be verified in polynomial time on a deterministic Turing machine (equivalently, the problem can be solved in polynomial time on a non-deterministic Turing machine). For the hardest problems in NP, the only known algorithms take exponential time (e.g. O(2^n), O(n!)).
Problems that look superficially similar, like finding the shortest path between two points in an unweighted graph (in P) and finding the longest acyclic path between two points in an unweighted graph (NP-hard), may belong to different complexity classes.
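To make the contrast concrete, here is a minimal sketch in Python (the graph is a made-up toy example): breadth-first search finds a shortest path in O(V + E), while the only obvious way to find a longest simple path is to enumerate paths, which explodes exponentially on larger graphs.

```
from collections import deque

# A small made-up graph as an adjacency list.
graph = {
    'a': ['b', 'c'],
    'b': ['a', 'c', 'd'],
    'c': ['a', 'b', 'd'],
    'd': ['b', 'c', 'e'],
    'e': ['d'],
}

def shortest_path_length(g, start, goal):
    """Breadth-first search: O(V + E), i.e. polynomial."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in g[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def longest_simple_path_length(g, start, goal):
    """Exhaustive search over simple paths: exponential in general;
    no polynomial-time algorithm is known (the problem is NP-hard)."""
    best = -1
    def dfs(node, visited, length):
        nonlocal best
        if node == goal:
            best = max(best, length)
            return
        for nxt in g[node]:
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, length + 1)
    dfs(start, {start}, 0)
    return best if best >= 0 else None

print(shortest_path_length(graph, 'a', 'e'))        # 3
print(longest_simple_path_length(graph, 'a', 'e'))  # 4
```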
Often, one can write an algorithm to convert or "reduce" one kind of problem into another; for example, one can represent the vertex cover problem (NP-complete) in terms of the 3-SAT problem (NP-complete). The problem here is that, while one can write an algorithm to convert one NP-complete problem into another NP-complete problem in polynomial time, there is no known algorithm which converts an NP-complete problem into a problem in P in polynomial time.
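As an aside, the classic textbook reduction actually runs in the other direction, from 3-SAT to vertex cover; the sketch below (Python, with a tiny made-up formula and function names of my own choosing) builds that reduction and then checks the resulting graph by brute force, since nobody knows a polynomial-time vertex cover solver.

```
from itertools import combinations

def three_sat_to_vertex_cover(clauses, num_vars):
    """Karp-style reduction: a 3-CNF formula becomes (graph, k) such that
    the formula is satisfiable iff the graph has a vertex cover of size <= k.
    Literals are integers: +i for x_i, -i for NOT x_i."""
    edges = set()
    # Variable gadget: an edge between the two literal nodes of each variable.
    for v in range(1, num_vars + 1):
        edges.add((('lit', v), ('lit', -v)))
    # Clause gadget: a triangle per clause, each corner tied to its literal node.
    for ci, clause in enumerate(clauses):
        corners = [('clause', ci, j) for j in range(3)]
        for a, b in combinations(corners, 2):
            edges.add((a, b))
        for j, lit in enumerate(clause):
            edges.add((corners[j], ('lit', lit)))
    return edges, num_vars + 2 * len(clauses)

def has_vertex_cover(edges, k):
    """Brute force over all vertex subsets: exponential, only for tiny inputs."""
    nodes = {v for edge in edges for v in edge}
    for size in range(k + 1):
        for cover in combinations(nodes, size):
            chosen = set(cover)
            if all(u in chosen or w in chosen for u, w in edges):
                return True
    return False

# (x1 OR x2 OR x3) AND (NOT x1 OR NOT x2 OR x3)
edges, k = three_sat_to_vertex_cover([(1, 2, 3), (-1, -2, 3)], num_vars=3)
print(has_vertex_cover(edges, k))  # True, because the formula is satisfiable
```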
One can verify a proposed solution to an NP problem in polynomial time, but no known algorithm finds a solution in polynomial time; the known general methods take exponential time (the best example I can think of is the subset sum problem).
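A minimal sketch of that asymmetry for subset sum (Python, with made-up numbers): checking a claimed answer is a single pass over it, while the obvious way to find one tries every subset.

```
from itertools import combinations

def verify(candidate, target):
    """Verification: O(n) -- just add up the claimed subset."""
    return sum(candidate) == target

def find_subset(numbers, target):
    """Search: tries every subset, O(2^n) in the worst case."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
print(find_subset(nums, 9))   # (4, 5), found by exhaustive search
print(verify((4, 5), 9))      # True, checked in linear time
```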
Most computer scientists believe that P != NP, but this statement lacks a proof. Whoever proves this statement will be remembered to the end of time.
On the flip side, if someone proved that P = NP (for example, if they discovered an algorithm which reduces an NP-complete problem to a problem in P in polynomial time, or found a polynomial-time solution to an NP-complete problem), that would "shake things up", to say the least.
While the P=NP problem is the best-known example of asking whether one complexity class reduces to another, it is also an open question whether P=BPP, NP=co-NP, NL=L, P=PSPACE, P=co-NP, etc.
What is the complexity class of integer factorization? Integer factorization is not known to be in P, and it is not known to be NP-complete either; it may well sit somewhere in between. Many prime number sieves run in time polynomial in the value of n, but that is exponential in the length of the number (usually measured in bits). We do not know of any algorithm on a classical computer which can factor a number in polynomial time, meaning an algorithm that runs in O(length_in_bits^k) time for some constant k.
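A minimal sketch of why "polynomial in the value" is still exponential in the input size: trial division (Python, below) performs on the order of sqrt(n) divisions, and sqrt(n) is roughly 2^(bits/2), which doubles with every two extra bits.

```
from math import isqrt

def trial_division(n):
    """Factor n by trial division: about sqrt(n) divisions in the worst case.
    That is polynomial in the *value* n but exponential in its bit length,
    since sqrt(n) is roughly 2 ** (n.bit_length() / 2)."""
    factors = []
    d = 2
    while d <= isqrt(n):
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2 ** 32 + 1))  # [641, 6700417] -- instant for 33 bits
# A 2048-bit RSA modulus would need on the order of 2**1024 divisions.
```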
Factoring using the general number field sieve is known to take sub-exponential time, but its cost still grows faster than any polynomial in the length of the number.
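For reference, the heuristic running time usually quoted for the general number field sieve (a standard estimate, quoted here rather than derived) is

$$\exp\!\left(\left(\sqrt[3]{64/9} + o(1)\right)(\ln n)^{1/3}(\ln\ln n)^{2/3}\right),$$

which is sub-exponential in the bit length of n, yet still grows faster than any polynomial in that bit length.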
Shor's algorithm, a quantum algorithm, does factor in polynomial time, but because it is a probabilistic algorithm, it has a (bounded) probability of error. It belongs to a (presumed) superset of P known as bounded-error quantum polynomial time, or BQP.
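To give a flavour of how it works, here is a minimal sketch (Python, toy-sized numbers, function names of my own choosing) of the classical post-processing in Shor's algorithm; the step done by brute force below, order finding, is exactly the part a quantum computer performs in polynomial time.

```
from math import gcd

def multiplicative_order(a, n):
    """Smallest r > 0 with a**r % n == 1, found by brute force here;
    this is the step Shor's algorithm offloads to the quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def try_split(n, a):
    """Classical post-processing: use the order of a mod n to extract a
    non-trivial factor of n (this can fail for an unlucky choice of a)."""
    g = gcd(a, n)
    if g > 1:
        return g            # a already shares a factor with n
    r = multiplicative_order(a, n)
    if r % 2 == 1:
        return None         # odd order: try another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None         # trivial square root: try another a
    return gcd(y - 1, n)

print(try_split(15, 7))  # 3, and 15 // 3 == 5
```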
Determining the complexity class of integer factorization is a more specific version of the following class of questions:
What is the fastest algorithm for XYZ? There are many polynomial time algorithms for multiplying n-bit numbers, multiplying matrices, and computing discrete Fourier transforms, but it is not known whether faster algorithms exist, nor are the lower bounds for these problems known.
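As one concrete illustration (a minimal sketch, not a production routine): Karatsuba multiplication, below in Python, beats the schoolbook O(n^2) method with roughly O(n^1.585) digit operations, yet nobody has proved how close this is to the true lower bound.

```
def karatsuba(x, y):
    """Multiply two non-negative integers by splitting them in half:
    three recursive multiplications instead of four gives ~O(n^1.585)."""
    if x < 10 or y < 10:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    a = karatsuba(hi_x, hi_y)
    b = karatsuba(lo_x, lo_y)
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b  # cross terms
    return (a << (2 * m)) + (c << m) + b

x, y = 12345678901234567890, 98765432109876543210
print(karatsuba(x, y) == x * y)  # True
```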
Similar to integer factorization, there is no known polynomial time algorithm for computing discrete logarithms, and it is not known whether the problem is in P, is NP-complete, or lies somewhere in between.
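A minimal sketch of the flavour of the best generic attacks (Python, toy-sized numbers): baby-step giant-step solves a discrete logarithm in roughly sqrt(p) steps, which is still exponential in the bit length of p.

```
from math import isqrt

def discrete_log(g, h, p):
    """Baby-step giant-step: find x with g**x % p == h, for prime p.
    Uses about sqrt(p) time and memory -- exponential in the bit length
    of p, which is why it only works for toy-sized inputs."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}  # baby steps g^0 .. g^(m-1)
    step = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * step) % p              # giant step
    return None

print(discrete_log(3, 13, 17))  # 4, since 3**4 % 17 == 13
```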
Frustratingly, linear programming has a notoriously difficult complexity analysis. It is known to be solvable in polynomial time, but it is not known whether it is solvable in strongly polynomial time. A proof showing that there is no strongly polynomial algorithm for linear programming would imply that it sits in a different complexity class entirely.
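For context on the distinction, here is a minimal sketch (Python, assuming SciPy is installed; the numbers are made up): the known polynomial-time algorithms for linear programming, such as interior-point methods, have running-time bounds that depend on the bit length of the coefficients, not just on the number of variables and constraints, which is exactly what "weakly polynomial" means.

```
# Tiny made-up LP: maximize x + y  subject to  x + 2y <= 4, 3x + y <= 6, x, y >= 0.
# SciPy's linprog minimizes, so we negate the objective.
from scipy.optimize import linprog

result = linprog(
    c=[-1, -1],                     # minimize -(x + y)
    A_ub=[[1, 2], [3, 1]],          # inequality constraint matrix
    b_ub=[4, 6],                    # right-hand sides
    bounds=[(0, None), (0, None)],  # x >= 0, y >= 0
)
print(result.x)  # roughly [1.6, 1.2], the optimal vertex
# The polynomial-time guarantees for such solvers are stated in terms of
# the number of bits needed to write these coefficients down (weakly
# polynomial); a strongly polynomial bound would depend only on the
# number of variables and constraints, and none is known.
```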