archimedes.jvp
- archimedes.jvp(func: Callable, name: str | None = None, static_argnums: int | Sequence[int] | None = None, static_argnames: str | Sequence[str] | None = None) → Callable
Create a function that evaluates the Jacobian-vector product of func.

Transforms a function into a new function that efficiently computes the product of the Jacobian matrix of func with a given vector, using forward-mode automatic differentiation.

- Parameters:
  - func (callable) – The function to differentiate. If not already a compiled function, it will be compiled with the specified static arguments.
  - name (str, optional) – Name for the created JVP function. If None, a name is automatically generated based on the primal function's name.
  - static_argnums (tuple of int, optional) – Specifies which positional arguments should be treated as static (not differentiated or traced symbolically). Only used if func is not already a compiled function.
  - static_argnames (tuple of str, optional) – Specifies which keyword arguments should be treated as static. Only used if func is not already a compiled function.
- Returns:
  A function with signature jvp_fun(x, v) that computes \(J(x) \cdot v\), where \(J(x)\) is the Jacobian of func evaluated at \(x\), and \(v\) is the vector to multiply with. The function returns a vector with the same shape as the output of func.
- Return type:
  callable
Notes
When to use this function:

- When you need directional derivatives along a specific vector
- When computing sensitivities for functions with many outputs and few inputs
- When the full Jacobian matrix would be too large to compute or store efficiently
- In iterative algorithms that require repeated Jacobian-vector products (see the sketch below)
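For the iterative case, a matrix-free linear solve only ever needs products \(J(x) \cdot v\), so the Jacobian never has to be assembled. The following is a minimal sketch of one Jacobian-free Newton step using SciPy's GMRES; the residual function and the np.asarray conversions are illustrative assumptions, and only arc.jvp itself is part of this API.

>>> import numpy as np
>>> import archimedes as arc
>>> from scipy.sparse.linalg import LinearOperator, gmres
>>>
>>> def residual(x):  # hypothetical nonlinear system F(x) = 0
...     return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0], like=x)
>>>
>>> x0 = np.array([1.0, 1.0])
>>> F_jvp = arc.jvp(residual)
>>>
>>> # Matrix-free operator v -> J(x0) @ v; the full Jacobian is never formed
>>> J_op = LinearOperator((2, 2), matvec=lambda v: np.asarray(F_jvp(x0, v)).ravel())
>>> dx, info = gmres(J_op, -np.asarray(residual(x0)).ravel())
>>> x1 = x0 + dx  # one Newton step: solve J(x0) dx = -F(x0)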
Conceptual model:
The Jacobian-vector product (JVP) computes the directional derivative of a function in the direction of a given vector, without explicitly forming the full Jacobian matrix. For a function \(f: R^n \rightarrow R^m\) and a vector \(v \in R^n\), the JVP is equivalent to \(J(x) \cdot v\), where \(J(x)\) is the \(m \times n\) Jacobian matrix at point \(x\).
Forward-mode automatic differentiation computes JVPs efficiently, with a computational cost similar to that of evaluating the original function, regardless of the output dimension. This makes JVP particularly effective for functions with few inputs but many outputs.
The JVP also represents how a small change in the input (in the direction of \(v\)) affects the output of the function, making it useful for sensitivity analysis.
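As a quick sanity check of this interpretation, the JVP can be compared against a forward-difference approximation of the directional derivative, \((f(x + \epsilon v) - f(x)) / \epsilon\). A minimal sketch, reusing the function from the Examples below (the step size and tolerance are illustrative):

>>> import numpy as np
>>> import archimedes as arc
>>>
>>> def f(x):
...     return np.array([x[0]**2, x[0]*x[1], np.exp(x[1])], like=x)
>>>
>>> x = np.array([2.0, 1.0])
>>> v = np.array([1.0, 0.5])
>>>
>>> # Forward-difference directional derivative: (f(x + eps*v) - f(x)) / eps
>>> eps = 1e-6
>>> fd = (f(x + eps * v) - f(x)) / eps
>>> print(np.allclose(arc.jvp(f)(x, v), fd, atol=1e-5))
True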
Edge cases:

- Raises ValueError if the function does not return a single vector-valued array
- The vector v must have the same shape as the input x
Examples
>>> import numpy as np
>>> import archimedes as arc
>>>
>>> # Example: JVP of a simple function
>>> def f(x):
...     return np.array([x[0]**2, x[0]*x[1], np.exp(x[1])], like=x)
>>>
>>> # Create the JVP function
>>> f_jvp = arc.jvp(f)
>>>
>>> # Evaluate at a point
>>> x = np.array([2.0, 1.0])
>>> v = np.array([1.0, 0.5])  # Direction vector
>>>
>>> # Compute JVP: J(x)·v
>>> auto_jvp = f_jvp(x, v)
>>> print(auto_jvp)
[4. 2. 1.35914091]
>>>
>>> # Compare with direct computation of Jacobian
>>> manual_jvp = arc.jac(f)(x) @ v
>>> print(np.allclose(auto_jvp, manual_jvp))
True
>>>
>>> # Example: Efficient sensitivity analysis for a high-dimensional output
>>> def high_dim_func(params):
...     # Function with few inputs but many outputs.
...     return np.sin(np.sum(np.outer(params, np.arange(1000)), axis=0))
>>>
>>> # JVP is much more efficient than computing the full Jacobian
>>> sensitivity = arc.jvp(high_dim_func)
>>> params = np.array([0.1, 0.2])
>>> direction = np.array([1.0, 0.0])  # Sensitivity in first parameter direction
>>>
>>> # Compute how output changes in the direction of the first parameter
>>> output_sensitivity = sensitivity(params, direction)
>>> print(output_sensitivity.shape)
(1000,)