Alright, so here's how I approached it: We know that when the
price x is 5 bucks, the number of people n is 120 (^1).
def ddx(f,x,h=0.001): return (f(x+h)-f(x-h))/(2*h) # numerical derivative
def newton(f,x0): return x0 - f(x0)/ddx(f,x0) # Newton-Raphson iteration
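To make that concrete, here is a minimal sketch of driving those two
one-liners to convergence. The helper name solve and the quadratic
target are placeholders of mine, not the thread's actual pricing
function:

def solve(f, x0, tol=1e-9, max_iter=50):
    # Repeat Newton-Raphson steps until the update becomes negligible.
    x = x0
    for _ in range(max_iter):
        x_next = newton(f, x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(solve(lambda x: x*x - 2, 1.0))  # ~1.4142135623730951, i.e. sqrt(2)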
It's hella rad to see you bust out those "next-level math tricks"
with just a single line each!
The "next-level math trick" Newton-Raphson has nothing to do with
functional programming.
Nobody up the thread was claiming it was functional. And you can
totally implement anything in an imperative or functional style.
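For what it's worth, here is a minimal sketch of the same iteration in
a functional style, where nothing is reassigned and the "current best
guess" is just an argument passed to a recursive call (newton_rec is
my name for it, not something from the thread):

def newton_rec(f, df, x, tol=1e-9, depth=0, max_depth=50):
    # Purely recursive Newton-Raphson: no mutable state, the guess lives in x.
    if depth >= max_depth:
        return x
    x_next = x - f(x) / df(x)
    return x_next if abs(x_next - x) < tol else newton_rec(f, df, x_next, tol, depth + 1, max_depth)

print(newton_rec(lambda x: x*x - 2, lambda x: 2*x, 1.0))  # ~1.4142135623730951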
def f_prime(x: float) -> float:
return 2*x
Before I knew about automatic differentiation, I thought neural-network
backpropagation was magic. Although coding up backward mode autodiff is…
You might enjoy implementing that with automatic differentiation (not
to be confused with symbolic differentiation) instead.
http://blog.sigfpe.com/2005/07/automatic-differentiation.html
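In that spirit, here is a minimal dual-number sketch of forward-mode
automatic differentiation. The Dual class and derivative helper are my
own illustration, not code from the linked post; the point is that the
derivative falls out of the arithmetic, with no symbolic rules and no
finite-difference step size to tune:

class Dual:
    # A value a + b*eps with eps**2 == 0; the b part carries the derivative.
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    # Evaluate f at x + eps and read off the coefficient of eps.
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x, 3.0))  # 6.0, same as f_prime(3.0)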
The "next-level math trick" Newton-Raphson has nothing to do with
functional programming. I have written solvers in a purely iterative
style.
As far as I know, Newton-Raphson is the opposite of functional
programming, since you iteratively solve for the root. Functional
programming is stateless: you are not allowed to store any state
(such as the current best guess for the root).
Off topic, but quick: are you the Paul Rubin I knew back in the '70s
in NYC, on E. 84th St?