Berkeley CS 61C Lecture 17

Here are some notes and corrections on this lecture:

Each note begins with a time; "ca." in front of a time means that it is approximate.

This lecture covers two topics:

  1. performance -- this topic is covered in Chapter 1 of the book
  2. floating-point numbers -- this topic is covered in Chapter 3 of the book
    • if you understand scientific notation (chemistry lab, physics lab), you understand floating point
    • he briefly talks about fixed-point numbers; this is just to motivate the term floating point
    • he sometimes uses the term mantissa when he means significand
    • MARS has a tool that converts between floating-point format and the way we write such numbers on paper
    • there's a similar tool here: possibly useful stuff
    • ca. 57:00 -- he talks here about the hidden-bit trick without using that term for it
      • the hidden-bit trick is akin to the times-4 trick in its purpose, which is to make more efficient use of the available bits (bits are a scarce resource, so using them efficiently is a good thing; it also gives us a great source of quiz questions, which is also a good thing)
      • the hidden-bit trick is also akin to the times-4 trick in its method, which is to leave out of the representation those bits whose values we know anyway (the 2 zeroes on the right in the times-4 trick, and the single leading 1 in the hidden-bit trick)
      • notice that the designers valued 1 extra bit enough that they went to the trouble of using this trick
    • Appendix A has a big section of floating-point instructions
    • these instructions are executed by a separate processor: Coprocessor 1
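
To see the hidden-bit trick concretely, here is a small Python sketch (mine, not from the lecture) that pulls apart a single-precision IEEE 754 float and reconstructs its value, putting the hidden 1 back by hand:

```python
import struct

def decode_single(x):
    """Decode a single-precision float into (sign, exponent, fraction, value)."""
    # Reinterpret the float's 32 bits as an unsigned integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31             # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF    # 23 stored bits of the significand
    # For normalized numbers the leading 1 of the significand is not
    # stored -- that's the hidden-bit trick.  We restore it here.
    significand = 1 + fraction / 2**23
    value = (-1) ** sign * significand * 2 ** (exponent - 127)
    return sign, exponent, fraction, value

print(decode_single(-6.25))  # → (1, 129, 4718592, -6.25)
```

Note that -6.25 = -1.5625 × 2², so the stored exponent is 2 + 127 = 129, and only the .5625 part of the significand is stored; the leading 1 comes for free. (This sketch handles only normalized numbers; zero, denormals, infinity, and NaN need special cases.)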