• @IAm_A_Complete_Idiot@sh.itjust.works
          8 months ago

          According to the benchmark in the article it’s already way faster at n = 1000. I think you’re overestimating the cost of multiplication relative to just cutting down n logarithmically.

          log_2(1000) ≈ 10, so n = 1000 takes only about 10 doubling steps; 2000 would be 11, and 4000 would be 12. Logs are crazy.
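          A quick check of how few steps that is (my own sketch, not from the article):

```python
import math

# A matrix-power / fast-doubling Fibonacci needs about log2(n)
# squaring steps, so quadrupling n adds only two more steps.
for n in (1000, 2000, 4000):
    print(n, math.ceil(math.log2(n)))
```

          That prints 10, 11, and 12 steps respectively.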

          • @cbarrick@lemmy.world
            8 months ago

            The article is comparing to the dynamic programming algorithm, which requires reading and writing to an array or hash table (the article uses a hash table, which is slower).

            The naive algorithm is way faster than the DP algorithm.
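            For reference, a minimal sketch of what a hash-table DP version looks like (my reconstruction; the article's actual code may differ):

```python
def fib_dp(n):
    # Bottom-up dynamic programming with a dict as the table.
    # Every step pays for hashing and a lookup, which is why this
    # loses badly to a plain two-variable loop.
    table = {0: 0, 1: 1}
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```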

            • @t_veor@sopuli.xyz
              8 months ago

              It’s not that hard to check yourself. Running the following code on my machine, I get that the linear algebra algorithm is already faster than the naive algorithm at around n = 100 or so. I’ve written a more optimised version of the naive algorithm, which is beaten somewhere between n = 200 and n = 500.

              Try running this Python code on your machine and see what you get:

              import timeit
              
              def fib_naive(n):
                  a = 0
                  b = 1
                  while n > 0:
                      b = a + b
                      a = b - a
                      n = n - 1
                  return a
              
              def fib_naive_opt(n):
                  a, b = 0, 1
                  for _ in range(n):
                      a, b = b, a + b
                  return a
              
              def matmul(a, b):
                  return (
                      (a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]),
                      (a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]),
                  )
              
              def fib_linear_alg(n):
                  z = ((1, 1), (1, 0))
                  y = ((1, 0), (0, 1))
                  while n > 0:
                      if n % 2 == 1:
                          y = matmul(y, z)
                      z = matmul(z, z)
                      n //= 2
              
                  return y[0][1]
              
              def time(func, n):
                  times = timeit.Timer(lambda: func(n)).repeat(repeat=5, number=10000)
                  return min(times)
              
              for n in (50, 100, 200, 500, 1000):
                  print("========")
                  print(f"n = {n}")
                  print(f"fib_naive:\t{time(fib_naive, n):.3g}")
                  print(f"fib_naive_opt:\t{time(fib_naive_opt, n):.3g}")
                  print(f"fib_linear_alg:\t{time(fib_linear_alg, n):.3g}")
              

              Here’s what it prints on my machine:

              ========
              n = 50
              fib_naive:      0.0296
              fib_naive_opt:  0.0145
              fib_linear_alg: 0.0701
              ========
              n = 100
              fib_naive:      0.0652
              fib_naive_opt:  0.0263
              fib_linear_alg: 0.0609
              ========
              n = 200
              fib_naive:      0.135
              fib_naive_opt:  0.0507
              fib_linear_alg: 0.0734
              ========
              n = 500
              fib_naive:      0.384
              fib_naive_opt:  0.156
              fib_linear_alg: 0.112
              ========
              n = 1000
              fib_naive:      0.9
              fib_naive_opt:  0.347
              fib_linear_alg: 0.152
              
  • @germanatlas
    8 months ago

    I’ll just stick with phi, thank you
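    For what it's worth, the phi (Binet) closed form is a one-liner, though in floating point it only stays exact up to around n = 70 (a sketch, not anyone's production code):

```python
import math

def fib_binet(n):
    # Binet's formula: F(n) = (phi**n - psi**n) / sqrt(5).
    # |psi**n| shrinks fast, so rounding phi**n / sqrt(5) suffices;
    # float64 rounding error breaks this past roughly n = 70.
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)
```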