This subject is a core unit for the statistics major, but is also taken by students from other disciplines with a weaker mathematics background. It is a fairly thorough look at the theory behind linear models, proving all of the major results from first principles: deriving the least squares estimator, showing that it is the best linear unbiased estimator, and then moving on to confidence intervals and hypothesis testing. In other words, it looks at how and why statistical computer packages work behind the scenes.
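To give a flavour of the headline result (in standard notation, not necessarily the lecturer's): for the model $y = X\beta + \varepsilon$ with $X$ of full column rank, minimising the residual sum of squares gives the least squares estimator

\[
\hat{\beta} = (X^\top X)^{-1} X^\top y,
\]

and the Gauss–Markov theorem then says that $\hat{\beta}$ has the smallest variance of any linear unbiased estimator of $\beta$.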
The proofs involve a lot of linear algebra and are mostly quite tedious: pages and pages of algebraic manipulation, applying just about every little fact you would have forgotten from first year (well, mostly the Spectral Theorem). The lecturing style is incredibly dry, with little to motivate the theory being developed, so it's easy to find yourself daydreaming or falling asleep. There's a final week and a half covering the design of experiments, which could easily have done with twice as many lectures at the expense of some of the more theoretical material.
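For anyone wondering where the Spectral Theorem comes in (this is the standard argument, not a quote from the notes): the hat matrix $H = X(X^\top X)^{-1} X^\top$ is symmetric and idempotent, so the Spectral Theorem diagonalises it as

\[
H = Q \Lambda Q^\top, \qquad \Lambda = \mathrm{diag}(1, \dots, 1, 0, \dots, 0),
\]

with exactly $p = \operatorname{rank}(X)$ eigenvalues equal to 1. That one fact drives most of the distribution theory, e.g. showing that under normal errors the residual sum of squares $y^\top (I - H)\, y$ is a $\sigma^2 \chi^2_{n-p}$ quadratic form.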
The assignments are all very easy, involving very little in the way of proofs and a whole heap of rote application of formulas. The exams have been getting increasingly theory-based over the years; this year's was about 50% linear algebra proofs and 50% rote calculation and interpretation of data.