Thursday, February 8th, 2024

Gratitude makes sense of our past, brings peace for today, and creates a vision for tomorrow.

— Melody Beattie

ECS165A Lecture

  • a pin is a latch that secures a page at the data-structure level; latches are an internal control mechanism; see https://stackoverflow.com/a/42464336
  • pinning guarantees that while the page is being read, it doesn’t become corrupted (e.g. suddenly replaced with another page or overwritten with something else)
  • a lock, in contrast, is at the logical level (it protects database contents rather than internal data structures)
  • Modified pages require setting a dirty bit to true. When evicting dirty pages, they must be written to disk first.
  • For crash recovery, we need a write-ahead log to ensure that changes in dirty pages can be recovered if the database crashes before they are written back to disk (see the sketch below).
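
A minimal sketch of how pins, dirty bits, and the write-ahead log fit together in a buffer pool; the names (BufferPage, evict, wal, disk) are mine, not from lecture or any real DBMS:

```python
class BufferPage:
    """One slot in the buffer pool (illustrative sketch only)."""
    def __init__(self, page_id, data):
        self.page_id = page_id
        self.data = data
        self.pin_count = 0    # > 0 while some operation is reading/writing the page
        self.dirty = False    # True once the in-memory copy differs from disk

    def modify(self, new_data, wal):
        # Write-ahead logging: record the change in the log before applying it,
        # so it can be recovered if we crash before the page reaches disk.
        wal.append((self.page_id, new_data))
        self.data = new_data
        self.dirty = True


def evict(page, disk):
    """Evict a page from the buffer pool, respecting pins and the dirty bit."""
    if page.pin_count > 0:
        # A pinned page must not be replaced while someone is using it.
        raise RuntimeError("cannot evict a pinned page")
    if page.dirty:
        # Dirty pages must be written back to disk before their slot is reused.
        disk[page.page_id] = page.data
        page.dirty = False
```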

ECS122A Lecture

Greedy Proof Review

  • greedy proof components that will be tested
  • greedy strategy proof (using breakpoint problem as an example)
    • step 1: state the greedy algorithm
      • example: from the current breakpoint, choose the furthest breakpoint that is still reachable (i.e. there is still enough fuel to get there), so that the resulting set of chosen breakpoints is as small as possible; i.e. at each step pick the largest breakpoint b with b - current <= C, where C is the fuel range (a code sketch of this strategy follows this list)
    • step 2: define the optimal solution (i.e. as a set of choices/decisions)
      • example: Let O = {o_1, o_2, ..., o_k} be the optimal solution. The size of O is minimum, where o_0 = 0 (starting point) and o_{i+1} - o_i <= C for every i
    • step 3: define the “first” greedy choice (let’s refer to it as g_1)
      • example: Let g_1 be the first choice made by the greedy strategy
        • We know g_1 <= C (starting from 0, greedy only picks a breakpoint it can still reach)
    • step 4: prove that a new solution O' = {g_1, o_2, ..., o_k} (swap o_1 for g_1) is still optimal
      • Why is g_1 >= o_1? It’s by definition of the greedy strategy, because g_1 is the furthest breakpoint from 0 that is still under C (and o_1 is also under C).
      • We know o_2 - o_1 <= C by the definition of the optimal solution.
      • We can have -g_1 <= -o_1 (multiply both sides by -1), then we can add o_2 to both sides: o_2 - g_1 <= o_2 - o_1
      • o_2 - g_1 <= C by transitive property
      • g_1 is reachable from 0 by definition of the greedy strategy (all its choices are under C).
      • o_2 is reachable from g_1 since o_2 - g_1 <= C
      • So O' has the same size as O and is still feasible, so it has to be as optimal as O.
  • optimal substructure proof (using breakpoint problem as example)
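
A minimal sketch of the greedy strategy from step 1, assuming the breakpoints are given as sorted distances from the start, C is the fuel range, and dest is the destination (the function and variable names are mine, not from lecture):

```python
def fewest_stops(breakpoints, dest, C):
    """From the current position, always jump to the furthest breakpoint
    that is still reachable (at most C away); repeat until dest is in range."""
    stops = []
    current = 0
    i = 0
    while current + C < dest:
        furthest = None
        # consume every breakpoint reachable from the current position
        while i < len(breakpoints) and breakpoints[i] <= current + C:
            furthest = breakpoints[i]
            i += 1
        if furthest is None:
            raise ValueError("destination is not reachable")
        stops.append(furthest)
        current = furthest
    return stops
```

For example, fewest_stops([1, 3, 4, 6, 8], dest=10, C=4) stops at 4 and then at 8.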

Dynamic Programming

  • memoization: store solutions to smaller problems
  • example: recursive Fibonacci sequence, but recursive calls are cached (see the sketch after this list)
  • rod cutting problem
    • given
      • the length n of a rod we’re trying to sell
      • market prices p[1], ..., p[n] for rods of lengths from 1 to n
    • goal
      • chop the rod into multiple segments such that the total sale revenue is maximum (assuming chopping is free)
    • brute force
      • iterate over all possible places to chop
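
A minimal sketch of the memoized Fibonacci example from the first bullet above (using Python’s built-in lru_cache as the cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Recursive Fibonacci; the cache means each fib(k) is computed only once,
    turning the exponential recursion into an O(n) computation."""
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache decorator, the same recursion recomputes identical subproblems exponentially many times.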

DP solution to rod cutting problem

runtime: O(n^2), as derived below

step 1: state the problem via a recurrence. Assume we know the optimal first cut for a rod of length n is x (inches); then opt(n) = p[x] + opt(n-x), where p is an array of market prices.

step 2: state the general case of the problem: since we don’t know the best first cut in advance, try every candidate, i.e. opt(n) = max over 1 <= x <= n of (p[x] + opt(n-x)), with opt(0) = 0.
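
The recurrence from steps 1–2 written directly as naive recursion (a sketch only; p is assumed to be a list where p[x] is the price of a length-x piece and p[0] is unused):

```python
def opt(n, p):
    """opt(n) = max over all first cuts x of p[x] + opt(n - x); opt(0) = 0.
    Naive version: exponential time, because subproblems are recomputed."""
    if n == 0:
        return 0
    return max(p[x] + opt(n - x, p) for x in range(1, n + 1))
```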

step 3: bottom-up memoization. r[i] is the optimal revenue for a rod of length i. We can build this array from i = 0 up to i = n.

Base cases: r[1] = p[1], r[0] = 0. Building r[i] takes O(i) (iterate over first cuts from 1 to i). We need to build everything from r[0] to r[n] (also O(n) entries), so combined the DP solution takes only O(n^2).

To actually output the best cuts, we need another array s, where s[i] stores the optimal first cut for a rod of length i. This allows us to trace back all the cuts taken to get from n down to the most granular pieces. For example, starting at a length of n, once s is calculated, we can use s[n] to get the first winning cut, s[n - s[n]] to get the next winning cut, etc.
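
A minimal sketch of the bottom-up version with both arrays, plus the trace-back described above (function names are mine; p[x] is the price of a length-x piece, p[0] unused):

```python
def rod_cutting(p, n):
    """Bottom-up DP: r[i] is the best revenue for a rod of length i,
    s[i] is the first cut that achieves r[i]."""
    r = [0] * (n + 1)              # r[0] = 0
    s = [0] * (n + 1)
    for i in range(1, n + 1):      # O(n) entries to fill
        best = float("-inf")
        for x in range(1, i + 1):  # O(i) candidate first cuts
            if p[x] + r[i - x] > best:
                best = p[x] + r[i - x]
                s[i] = x
        r[i] = best
    return r, s


def trace_cuts(s, n):
    """Read off the winning cuts: s[n], then s[n - s[n]], and so on."""
    cuts = []
    while n > 0:
        cuts.append(s[n])
        n -= s[n]
    return cuts
```

For example, with prices p = [0, 1, 5, 8, 9, 10, 17, 17, 20] (p[x] is the price for length x) and n = 8, r[8] is 22 and trace_cuts(s, 8) returns [2, 6].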

Proof of correctness of DP algorithms only requires the optimal substructure proof in this class (i.e. a subset of an optimal solution is optimal for its corresponding subproblem).

Proof (optimal substructure proof)

Given: A = {a_1, a_2, ..., a_k} is an optimal cut strategy for a rod of length n.

Prove: the set A' = {a_2, ..., a_k} is an optimal cut strategy for a rod of length n - a_1.

Proof by contradiction

Assume A' is not an optimal cut strategy. Then there exists a strategy B that is optimal for a rod of length n - a_1, with rev(B) > rev(A'). But then {a_1} ∪ B is a valid cut strategy for a rod of length n, and rev({a_1} ∪ B) = p[a_1] + rev(B) > p[a_1] + rev(A') = rev(A), which contradicts A being optimal. So A' must be optimal.