Computational Mathematics for Analysis¶

Lecture of December 22, 2022

© 2022 Prof. Dr. Rüdiger W. Braun

In [1]:
from sympy import *
init_printing()

Systems of Linear Equations¶

In [2]:
A = Matrix(3,3,range(1,10))
A
Out[2]:
$\displaystyle \left[\begin{matrix}1 & 2 & 3\\4 & 5 & 6\\7 & 8 & 9\end{matrix}\right]$
In [3]:
b = Matrix([6,15,24])
b
Out[3]:
$\displaystyle \left[\begin{matrix}6\\15\\24\end{matrix}\right]$
In [4]:
x_var = symbols('x1:4')
x = Matrix(x_var)
x
Out[4]:
$\displaystyle \left[\begin{matrix}x_{1}\\x_{2}\\x_{3}\end{matrix}\right]$
In [5]:
glg = Eq(A*x, b)
glg
Out[5]:
$\displaystyle \left[\begin{matrix}x_{1} + 2 x_{2} + 3 x_{3}\\4 x_{1} + 5 x_{2} + 6 x_{3}\\7 x_{1} + 8 x_{2} + 9 x_{3}\end{matrix}\right] = \left[\begin{matrix}6\\15\\24\end{matrix}\right]$
In [6]:
lsg = solve(glg)
lsg
Out[6]:
$\displaystyle \left\{ x_{1} : x_{3}, \ x_{2} : 3 - 2 x_{3}\right\}$
In [7]:
lsg = solve(glg, {x[1],x[2]})
lsg
Out[7]:
$\displaystyle \left\{ x_{2} : 3 - 2 x_{1}, \ x_{3} : x_{1}\right\}$

Hence $$ \{ x \in \mathbb R^3 \mid Ax = b \} = \left\{ \left. \begin{pmatrix} x_1 \\ 3 - 2x_1 \\ x_1 \end{pmatrix} \right| x_1 \in \mathbb R \right\} $$

In [8]:
A.det()
Out[8]:
$\displaystyle 0$

Check:

In [10]:
v  = x.subs(lsg)
v
Out[10]:
$\displaystyle \left[\begin{matrix}x_{1}\\3 - 2 x_{1}\\x_{1}\end{matrix}\right]$
In [11]:
A * v - b
Out[11]:
$\displaystyle \left[\begin{matrix}0\\0\\0\end{matrix}\right]$

A concrete solution

In [12]:
v.subs(x[0], 0)
Out[12]:
$\displaystyle \left[\begin{matrix}0\\3\\0\end{matrix}\right]$

An equation without a solution

In [13]:
b = Matrix([0,2,3])
glg = Eq(A*x, b)
solve(glg)
Out[13]:
$\displaystyle \left[ \right]$
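
A solvability check via the rank criterion (a sketch, not executed here): $Ax = b$ is solvable exactly when the rank of $A$ equals the rank of the augmented matrix $(A \mid b)$.

In [ ]:
# rank of A versus rank of the augmented matrix; expected: (2, 3), hence no solution
A.rank(), Matrix.hstack(A, b).rank()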

Kernel of $A$

In [14]:
glg = Eq(A*x, 0)
glg
Out[14]:
$\displaystyle \text{False}$
In [15]:
glg = Eq(A*x, 0, evaluate=False)
glg
Out[15]:
$\displaystyle \left[\begin{matrix}x_{1} + 2 x_{2} + 3 x_{3}\\4 x_{1} + 5 x_{2} + 6 x_{3}\\7 x_{1} + 8 x_{2} + 9 x_{3}\end{matrix}\right] = 0$
In [17]:
#solve(glg)  # TypeError
In [18]:
null = Matrix([0,0,0])
glg = Eq(A*x, null)
glg
Out[18]:
$\displaystyle \left[\begin{matrix}x_{1} + 2 x_{2} + 3 x_{3}\\4 x_{1} + 5 x_{2} + 6 x_{3}\\7 x_{1} + 8 x_{2} + 9 x_{3}\end{matrix}\right] = \left[\begin{matrix}0\\0\\0\end{matrix}\right]$
In [19]:
solve(glg)
Out[19]:
$\displaystyle \left\{ x_{1} : x_{3}, \ x_{2} : - 2 x_{3}\right\}$
In [20]:
A.nullspace()
Out[20]:
$\displaystyle \left[ \left[\begin{matrix}1\\-2\\1\end{matrix}\right]\right]$

Basis of the kernel
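
A quick check (a sketch): $A$ maps each basis vector of the kernel to zero.

In [ ]:
# multiply every kernel basis vector by A; expected: a list containing one zero vector
[A * k for k in A.nullspace()]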

Eigenvalues and Eigenvectors¶

In [21]:
A
Out[21]:
$\displaystyle \left[\begin{matrix}1 & 2 & 3\\4 & 5 & 6\\7 & 8 & 9\end{matrix}\right]$
In [22]:
A.eigenvals()
Out[22]:
$\displaystyle \left\{ 0 : 1, \ \frac{15}{2} - \frac{3 \sqrt{33}}{2} : 1, \ \frac{15}{2} + \frac{3 \sqrt{33}}{2} : 1\right\}$
In [23]:
A.eigenvects(simplify=True)
Out[23]:
$\displaystyle \left[ \left( 0, \ 1, \ \left[ \left[\begin{matrix}1\\-2\\1\end{matrix}\right]\right]\right), \ \left( \frac{15}{2} - \frac{3 \sqrt{33}}{2}, \ 1, \ \left[ \left[\begin{matrix}- \frac{3 \sqrt{33}}{22} - \frac{1}{2}\\\frac{1}{4} - \frac{3 \sqrt{33}}{44}\\1\end{matrix}\right]\right]\right), \ \left( \frac{15}{2} + \frac{3 \sqrt{33}}{2}, \ 1, \ \left[ \left[\begin{matrix}- \frac{1}{2} + \frac{3 \sqrt{33}}{22}\\\frac{1}{4} + \frac{3 \sqrt{33}}{44}\\1\end{matrix}\right]\right]\right)\right]$

Each triple contains: the eigenvalue, its algebraic multiplicity, and a basis of the eigenspace.
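
A sketch of how to unpack such a triple and verify the eigenvalue equation $Av = \lambda v$ (the names lam, basis, v0 are only used in this sketch):

In [ ]:
# first triple: eigenvalue 0 with a one-dimensional eigenspace
lam, mult, basis = A.eigenvects(simplify=True)[0]
v0 = basis[0]
simplify(A*v0 - lam*v0)   # expected: the zero vector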

In [24]:
B = Matrix([[1,0,1], [0,1,0], [0,0,1]])
B
Out[24]:
$\displaystyle \left[\begin{matrix}1 & 0 & 1\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
In [25]:
B.eigenvals()
Out[25]:
$\displaystyle \left\{ 1 : 3\right\}$
In [26]:
B.eigenvects()
Out[26]:
$\displaystyle \left[ \left( 1, \ 3, \ \left[ \left[\begin{matrix}1\\0\\0\end{matrix}\right], \ \left[\begin{matrix}0\\1\\0\end{matrix}\right]\right]\right)\right]$

algebraic multiplicity > geometric multiplicity $\Rightarrow$ the matrix is not diagonalizable
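
SymPy can answer this question directly; a sketch:

In [ ]:
B.is_diagonalizable()   # expected: False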

Jordan Form¶

In [27]:
B
Out[27]:
$\displaystyle \left[\begin{matrix}1 & 0 & 1\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
In [28]:
T, J = B.jordan_form()
T, J
Out[28]:
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0\\0 & 0 & 1\\0 & 1 & 0\end{matrix}\right], \ \left[\begin{matrix}1 & 1 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]\right)$
In [29]:
T * J * T**(-1)
Out[29]:
$\displaystyle \left[\begin{matrix}1 & 0 & 1\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]$
In [30]:
C = Matrix(3, 3, [-4, -2, -3, 5, 3, 3, 5, 2, 4])
C
Out[30]:
$\displaystyle \left[\begin{matrix}-4 & -2 & -3\\5 & 3 & 3\\5 & 2 & 4\end{matrix}\right]$
In [31]:
T, J = C.jordan_form()
T, J
Out[31]:
$\displaystyle \left( \left[\begin{matrix}-5 & 1 & - \frac{2}{5}\\5 & 0 & 1\\5 & 0 & 0\end{matrix}\right], \ \left[\begin{matrix}1 & 1 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]\right)$
In [32]:
C.eigenvects()
Out[32]:
$\displaystyle \left[ \left( 1, \ 3, \ \left[ \left[\begin{matrix}- \frac{2}{5}\\1\\0\end{matrix}\right], \ \left[\begin{matrix}- \frac{3}{5}\\0\\1\end{matrix}\right]\right]\right)\right]$
In [33]:
w1 = C.eigenvects()[0][2][0]
w1
Out[33]:
$\displaystyle \left[\begin{matrix}- \frac{2}{5}\\1\\0\end{matrix}\right]$
In [34]:
w2 = C.eigenvects()[0][2][1]
w2
Out[34]:
$\displaystyle \left[\begin{matrix}- \frac{3}{5}\\0\\1\end{matrix}\right]$
In [35]:
5*w1 + 5*w2, T.col(0)
Out[35]:
$\displaystyle \left( \left[\begin{matrix}-5\\5\\5\end{matrix}\right], \ \left[\begin{matrix}-5\\5\\5\end{matrix}\right]\right)$
In [36]:
T * J * T**(-1) == C
Out[36]:
True

The Jordan normal form depends discontinuously on the data.

In [37]:
delta = Symbol('delta')
C[0,0] += delta
C
Out[37]:
$\displaystyle \left[\begin{matrix}\delta - 4 & -2 & -3\\5 & 3 & 3\\5 & 2 & 4\end{matrix}\right]$
In [38]:
Te, Je = C.jordan_form()
Je
Out[38]:
$\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & \frac{\delta}{2} - \frac{\sqrt{\delta^{2} - 20 \delta}}{2} + 1 & 0\\0 & 0 & \frac{\delta}{2} + \frac{\sqrt{\delta^{2} - 20 \delta}}{2} + 1\end{matrix}\right]$
In [39]:
Je.limit(delta, 0), J
Out[39]:
$\displaystyle \left( \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}1 & 1 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]\right)$

Norms of Vectors and Matrices¶

In [40]:
v = Matrix([1,2,3])
v.norm()
Out[40]:
$\displaystyle \sqrt{14}$
In [42]:
a, b, c, d = symbols("a b c d", real=True)
w = Matrix([a, b, c])
w.norm()
Out[42]:
$\displaystyle \sqrt{a^{2} + b^{2} + c^{2}}$
In [43]:
v.norm(oo)
Out[43]:
$\displaystyle 3$
In [44]:
w.norm(oo)
Out[44]:
$\displaystyle \max\left(\left|{a}\right|, \left|{b}\right|, \left|{c}\right|\right)$
In [47]:
v.norm(Rational(8,7))
Out[47]:
$\displaystyle \left(1 + 2 \cdot \sqrt[7]{2} + 3 \cdot \sqrt[7]{3}\right)^{\frac{7}{8}}$
In [49]:
w.norm(Rational(8,7))
Out[49]:
$\displaystyle \left(\left|{a}\right|^{\frac{8}{7}} + \left|{b}\right|^{\frac{8}{7}} + \left|{c}\right|^{\frac{8}{7}}\right)^{\frac{7}{8}}$
In [50]:
A
Out[50]:
$\displaystyle \left[\begin{matrix}1 & 2 & 3\\4 & 5 & 6\\7 & 8 & 9\end{matrix}\right]$
In [51]:
A.norm()
Out[51]:
$\displaystyle \sqrt{285}$
In [52]:
M = Matrix(2,2,[a,b,c,d])
M
Out[52]:
$\displaystyle \left[\begin{matrix}a & b\\c & d\end{matrix}\right]$
In [53]:
M.norm()
Out[53]:
$\displaystyle \sqrt{a^{2} + b^{2} + c^{2} + d^{2}}$

This is the Frobenius norm. It is not an induced matrix norm (operator norm), but it is submultiplicative, i.e. it satisfies $\Vert AB \Vert \le \Vert A \Vert \, \Vert B \Vert$.
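
A numerical spot check of submultiplicativity with the concrete matrix $A$ from above (a sketch, not a proof):

In [ ]:
# Frobenius norm of A*A compared with the product of the norms; expected: True
(A*A).norm() <= A.norm() * A.norm()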

In [54]:
A.norm(2)
Out[54]:
$\displaystyle \sqrt{\frac{3 \sqrt{8881}}{2} + \frac{285}{2}}$
In [55]:
B = A.T*A
B
Out[55]:
$\displaystyle \left[\begin{matrix}66 & 78 & 90\\78 & 93 & 108\\90 & 108 & 126\end{matrix}\right]$
In [56]:
B.eigenvals()
Out[56]:
$\displaystyle \left\{ 0 : 1, \ \frac{285}{2} - \frac{3 \sqrt{8881}}{2} : 1, \ \frac{3 \sqrt{8881}}{2} + \frac{285}{2} : 1\right\}$

This is an induced matrix norm: the spectral norm $\Vert A \Vert_2$ is the square root of the largest eigenvalue of $A^T A$, as the computation above shows.
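
A sketch of this relation, using the eigenvalues of $B = A^T A$ computed above:

In [ ]:
# difference between A.norm(2) and the square root of the largest eigenvalue of A.T*A; expected: 0
simplify(A.norm(2) - sqrt(Max(*B.eigenvals())))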

In [57]:
M.norm(1)
Out[57]:
$\displaystyle \max\left(\left|{a}\right| + \left|{c}\right|, \left|{b}\right| + \left|{d}\right|\right)$

Maximum column sum norm

In [58]:
M.norm(oo)
Out[58]:
$\displaystyle \max\left(\left|{a}\right| + \left|{b}\right|, \left|{c}\right| + \left|{d}\right|\right)$

Maximum row sum norm

These, too, are induced matrix norms.
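
A spot check on the concrete matrix $A$ (a sketch): its maximum column sum is $3+6+9=18$ and its maximum row sum is $7+8+9=24$.

In [ ]:
A.norm(1), A.norm(oo)   # expected: (18, 24)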

Vector Analysis¶

In [59]:
x, y, z = symbols('x y z')
variablen = [x,y,z]
In [60]:
f = exp(x**2 + 2*y**2 + 3*z**2)
f
Out[60]:
$\displaystyle e^{x^{2} + 2 y^{2} + 3 z^{2}}$
In [61]:
J = Matrix([f]).jacobian(variablen)
J
Out[61]:
$\displaystyle \left[\begin{matrix}2 x e^{x^{2} + 2 y^{2} + 3 z^{2}} & 4 y e^{x^{2} + 2 y^{2} + 3 z^{2}} & 6 z e^{x^{2} + 2 y^{2} + 3 z^{2}}\end{matrix}\right]$

This is the Jacobian matrix.

In [62]:
nabla_f = J.T
nabla_f
Out[62]:
$\displaystyle \left[\begin{matrix}2 x e^{x^{2} + 2 y^{2} + 3 z^{2}}\\4 y e^{x^{2} + 2 y^{2} + 3 z^{2}}\\6 z e^{x^{2} + 2 y^{2} + 3 z^{2}}\end{matrix}\right]$

This is the gradient.
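
A sketch: the entries of the gradient are the partial derivatives of $f$.

In [ ]:
# column vector of partial derivatives compared with nabla_f; expected: True
Matrix([f.diff(v) for v in variablen]) == nabla_f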

In [63]:
A = Matrix([f, f**2, f**3])
A
Out[63]:
$\displaystyle \left[\begin{matrix}e^{x^{2} + 2 y^{2} + 3 z^{2}}\\e^{2 x^{2} + 4 y^{2} + 6 z^{2}}\\e^{3 x^{2} + 6 y^{2} + 9 z^{2}}\end{matrix}\right]$
In [64]:
A.jacobian(variablen)
Out[64]:
$\displaystyle \left[\begin{matrix}2 x e^{x^{2} + 2 y^{2} + 3 z^{2}} & 4 y e^{x^{2} + 2 y^{2} + 3 z^{2}} & 6 z e^{x^{2} + 2 y^{2} + 3 z^{2}}\\4 x e^{2 x^{2} + 4 y^{2} + 6 z^{2}} & 8 y e^{2 x^{2} + 4 y^{2} + 6 z^{2}} & 12 z e^{2 x^{2} + 4 y^{2} + 6 z^{2}}\\6 x e^{3 x^{2} + 6 y^{2} + 9 z^{2}} & 12 y e^{3 x^{2} + 6 y^{2} + 9 z^{2}} & 18 z e^{3 x^{2} + 6 y^{2} + 9 z^{2}}\end{matrix}\right]$
In [65]:
H = hessian(f, variablen)
H
Out[65]:
$\displaystyle \left[\begin{matrix}4 x^{2} e^{x^{2} + 2 y^{2} + 3 z^{2}} + 2 e^{x^{2} + 2 y^{2} + 3 z^{2}} & 8 x y e^{x^{2} + 2 y^{2} + 3 z^{2}} & 12 x z e^{x^{2} + 2 y^{2} + 3 z^{2}}\\8 x y e^{x^{2} + 2 y^{2} + 3 z^{2}} & 16 y^{2} e^{x^{2} + 2 y^{2} + 3 z^{2}} + 4 e^{x^{2} + 2 y^{2} + 3 z^{2}} & 24 y z e^{x^{2} + 2 y^{2} + 3 z^{2}}\\12 x z e^{x^{2} + 2 y^{2} + 3 z^{2}} & 24 y z e^{x^{2} + 2 y^{2} + 3 z^{2}} & 36 z^{2} e^{x^{2} + 2 y^{2} + 3 z^{2}} + 6 e^{x^{2} + 2 y^{2} + 3 z^{2}}\end{matrix}\right]$
In [66]:
H == H.T
Out[66]:
True
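
A sketch: the Hessian is the Jacobian matrix of the gradient.

In [ ]:
# Jacobian of the gradient compared with hessian(f, variablen); expected: True
nabla_f.jacobian(variablen) == H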

Definiteness¶

In [ ]:
x = Symbol("x")
In [67]:
H1 = H.subs({x:1, y:0, z:-1})
H1
Out[67]:
$\displaystyle \left[\begin{matrix}6 e^{4} & 0 & - 12 e^{4}\\0 & 4 e^{4} & 0\\- 12 e^{4} & 0 & 42 e^{4}\end{matrix}\right]$
In [71]:
H1.is_positive_definite
Out[71]:
True
In [72]:
H1.is_indefinite
Out[72]:
False
In [73]:
M = Matrix(2,2,[x,0,0,1])
M
Out[73]:
$\displaystyle \left[\begin{matrix}x & 0\\0 & 1\end{matrix}\right]$
In [74]:
M.is_positive_definite  # cannot be decided without knowing x
In [75]:
print(M.is_positive_definite)
None
In [83]:
x = Symbol("x", positive=True)
M = Matrix([[0,x], [x,0]])
M
Out[83]:
$\displaystyle \left[\begin{matrix}0 & x\\x & 0\end{matrix}\right]$
In [88]:
M.is_indefinite
Out[88]:
True
Three-valued logic: is_positive_definite and is_indefinite return True or False when the answer is determined, and None when it cannot be decided from the available information.
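
A sketch: with an assumption on the symbol, the question becomes decidable. The name xp is only used for this illustration.

In [ ]:
# with a positivity assumption, diag(x, 1) should be recognized as positive definite
xp = Symbol("x", positive=True)
Matrix(2, 2, [xp, 0, 0, 1]).is_positive_definite   # expected: True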

Extrema in Several Variables¶

In [92]:
x = Symbol("x")
In [93]:
f = -x**4/2 - x**2*y**2 - y**4/2 + x**3 - 3*x*y**2
f
Out[93]:
$\displaystyle - \frac{x^{4}}{2} + x^{3} - x^{2} y^{2} - 3 x y^{2} - \frac{y^{4}}{2}$
In [94]:
gr = Matrix([f]).jacobian([x,y])
gr
Out[94]:
$\displaystyle \left[\begin{matrix}- 2 x^{3} + 3 x^{2} - 2 x y^{2} - 3 y^{2} & - 2 x^{2} y - 6 x y - 2 y^{3}\end{matrix}\right]$
In [95]:
lsg = solve(gr)
lsg
Out[95]:
$\displaystyle \left[ \left\{ x : - \frac{3}{4}, \ y : - \frac{3 \sqrt{3}}{4}\right\}, \ \left\{ x : - \frac{3}{4}, \ y : \frac{3 \sqrt{3}}{4}\right\}, \ \left\{ x : 0, \ y : 0\right\}, \ \left\{ x : \frac{3}{2}, \ y : 0\right\}\right]$

The critical points
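
A sketch: the gradient should vanish at each of these points.

In [ ]:
# substitute every critical point into the gradient; expected: four zero row vectors
[simplify(gr.subs(s)) for s in lsg]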

In [96]:
H = hessian(f, [x,y])
H
Out[96]:
$\displaystyle \left[\begin{matrix}- 6 x^{2} + 6 x - 2 y^{2} & - 4 x y - 6 y\\- 4 x y - 6 y & - 2 x^{2} - 6 x - 6 y^{2}\end{matrix}\right]$
In [97]:
H1 = H.subs(lsg[0])
H1
Out[97]:
$\displaystyle \left[\begin{matrix}- \frac{45}{4} & \frac{9 \sqrt{3}}{4}\\\frac{9 \sqrt{3}}{4} & - \frac{27}{4}\end{matrix}\right]$
In [98]:
H1.is_negative_definite
Out[98]:
True
In [99]:
H2 = H.subs(lsg[1])
H2
Out[99]:
$\displaystyle \left[\begin{matrix}- \frac{45}{4} & - \frac{9 \sqrt{3}}{4}\\- \frac{9 \sqrt{3}}{4} & - \frac{27}{4}\end{matrix}\right]$
In [100]:
H2.is_negative_definite
Out[100]:
True
In [101]:
H3 = H.subs(lsg[2])
H3
Out[101]:
$\displaystyle \left[\begin{matrix}0 & 0\\0 & 0\end{matrix}\right]$
In [102]:
H3.is_positive_semidefinite
Out[102]:
True
In [103]:
H3.is_negative_semidefinite
Out[103]:
True

The Hessian is both positive and negative semidefinite (it is the zero matrix), so the second-derivative test is inconclusive; we postpone this case until later.

In [106]:
H4 = H.subs(lsg[3])
H4
Out[106]:
$\displaystyle \left[\begin{matrix}- \frac{9}{2} & 0\\0 & - \frac{27}{2}\end{matrix}\right]$

Immediately visible: negative definite (a diagonal matrix with strictly negative diagonal entries).
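
As a check, one can ask SymPy directly (a sketch):

In [ ]:
H4.is_negative_definite   # expected: True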

Back to

In [107]:
lsg[2]
Out[107]:
$\displaystyle \left\{ x : 0, \ y : 0\right\}$
In [105]:
f_x = f.subs(y, lsg[2][y])
f_x
Out[105]:
$\displaystyle - \frac{x^{4}}{2} + x^{3}$

This function has a saddle point at $0$; hence $f$ has no local extremum at the critical point $(0,0)$.
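
A sketch of the sign change along the $x$-axis:

In [ ]:
# f restricted to the x-axis takes both signs arbitrarily close to 0
f_x.subs(x, Rational(1, 10)), f_x.subs(x, -Rational(1, 10))   # expected: one positive and one negative value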