I think Return of the Jedi is pretty cool. I always thought that Luke's green lightsaber in that film was awesome for some reason.
Anyway, here's some computational complexity theory that I still remember from studying last week for my final in that course (a graduate-level course at OU):
An example of the translation lemma,
NSPACE(log(n)) ⊆ DSPACE((log(n))^2) ⟹ NSPACE(f(n)) ⊆ DSPACE((f(n))^2) for all space-constructible functions f(n) ≥ log(n).
Proof (by use of a "padding argument"): Let L be any language in NSPACE(f(n)), and let M be a nondeterministic Turing machine that decides L and is space-bounded by f(n).
Define L_f as the "padded" version of L:
L_f = {x#^[2^f(|x|) - |x|] | x ∈ L}
Now, let M_f be a nondeterministic Turing machine that decides L_f and is space-bounded by log_2(n). On input w, M_f works as follows:
(1) Verify that w is of the form x#^[2^f(|x|) - |x|] (possible within the space bound because f is space-constructible).
(2) Simulate M on the symbols of x, ignoring the # symbols.
M_f halts on input w = x#^[2^f(|x|) - |x|] using space f(|x|) = log_2(2^f(|x|)) = log_2(|x| + 2^f(|x|) - |x|) = log_2(|w|), so M_f really is space-bounded by log_2(n) in the length of its own input.
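The padding arithmetic is the whole trick here, so here's a tiny sketch of it in Python. The choice f(n) = n is just a hypothetical example of a space-constructible bound with f(n) ≥ log n; the `pad` function is mine, not part of the proof.

```python
import math

def f(n):
    # Hypothetical space bound; f(n) = n is space-constructible and >= log n.
    return n

def pad(x):
    # Build the padded word x #^(2^f(|x|) - |x|) from the definition of L_f.
    n = len(x)
    return x + "#" * (2 ** f(n) - n)

w = pad("abc")                      # |x| = 3, so |w| = 2^f(3) = 8
assert len(w) == 2 ** f(3)
# M's space usage on x, namely f(|x|), is exactly log_2 of the padded length,
# which is why the simulation is log-space in |w|:
assert f(3) == math.log2(len(w))
```

The point: measuring the same computation against the (exponentially longer) padded input turns an f(n) space bound into a log(n) one.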
So, L_f is in NSPACE(log(n)), and because NSPACE(log(n)) ⊆ DSPACE((log(n))^2), L_f is also in DSPACE((log(n))^2). Hence there exists a deterministic Turing machine that decides L_f and is space-bounded by (log_2(n))^2; call it M'_f. Now, let M' be a deterministic Turing machine that decides L by simulating M'_f on the padded input x#^[2^f(|x|) - |x|] (without ever writing the padding down) in the following way:
(1) While M'_f's tape head is on a symbol of x, M' simulates it directly on x.
(2) While M'_f's tape head is in the padded portion, M' maintains a counter tracking the head's position there and supplies # as the scanned symbol.
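Step (2) is just the observation that the padding never needs to be stored: the position counter alone determines what M'_f would see. A minimal sketch, again with the hypothetical bound f(n) = n:

```python
def scan(x, f, i):
    # Return the symbol M'_f would see at head position i on the padded
    # input x #^(2^f(|x|) - |x|), using only the counter i (at most
    # f(|x|) bits) instead of materializing the exponential padding.
    n = len(x)
    assert 0 <= i < 2 ** f(n), "head position must lie on the padded input"
    return x[i] if i < n else "#"

f = lambda n: n  # hypothetical space-constructible bound, as before
assert scan("abc", f, 1) == "b"   # inside x: real symbol
assert scan("abc", f, 6) == "#"   # inside the padding: always #
```

Since the counter's value never exceeds 2^f(|x|), it fits in f(|x|) bits, which is where the extra f(|x|) space term in the analysis below comes from.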
M' halts on input x using space [log_2(|x| + 2^f(|x|) - |x|)]^2 = [log_2(2^f(|x|))]^2 = [f(|x|)]^2 to simulate M'_f, plus f(|x|) space for the counter (its value never exceeds 2^f(|x|), so f(|x|) bits suffice), so M' is a deterministic Turing machine that is space-bounded by O((f(n))^2). Hence, L is in DSPACE((f(n))^2), and so we therefore have that NSPACE(f(n)) ⊆ DSPACE((f(n))^2). Q.E.D.