04/06/2020 lecture
edomora97 committed Jun 4, 2020
1 parent 6ff3aec commit 5512725
Showing 2 changed files with 151 additions and 0 deletions.
1 change: 1 addition & 0 deletions MIDA2.tex
@@ -39,5 +39,6 @@
\input{lectures/2020-05-25.tex}
\input{lectures/2020-05-27.tex}
\input{lectures/2020-06-03.tex}
\input{lectures/2020-06-04.tex}

\end{document}
150 changes: 150 additions & 0 deletions lectures/2020-06-04.tex
@@ -0,0 +1,150 @@
\chapter{Appendix: Discretization of an analog system}
\newlecture{Sergio Savaresi}{04/06/2020}

\missingfigure{Fig1}

\subsection*{A to D converter}

\missingfigure{Fig2}

\begin{description}
\item[Time discretization] $\Delta T$ is the sampling time
\item[Amplitude discretization] Number of levels used for the discretization, e.g. a \emph{10-bit discretization} uses $2^{10} = 1024$ amplitude levels (see the numeric example after this list)
\end{description}
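
As a purely illustrative numeric example (the $0$--$5\,\mathrm{V}$ input range is an assumption, not from the lecture), a 10-bit converter over a $0$--$5\,\mathrm{V}$ range has an amplitude resolution of
\[
\frac{5\,\mathrm{V}}{2^{10}} = \frac{5\,\mathrm{V}}{1024} \approx 4.9\,\mathrm{mV}
\]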

A high-quality A/D converter:
\begin{itemize}
\item Can use a small $\Delta T$
\item Uses a high number of levels (e.g. 16 bits)
\end{itemize}


\subsection*{D to A converter}

\missingfigure{Fig3}

If $\Delta T$ is sufficiently small, the step-wise analog signal is very similar to a smooth analog signal.

\missingfigure{Fig4}

What is the model seen from the digital perspective?

\missingfigure{Fig5}

\begin{itemize}
\item We perform black-box (B.B.) system identification from measured data: we directly estimate a discrete-time model
\item We have a physical white-box (W.B.) model in continuous time, and we need to discretize it
\end{itemize}

The most used approach is the State-Space Transformation.

\[
S: \begin{cases}
	\dot{x} = Ax + Bu \\
	y = Cx + (Du)
\end{cases}
\qquad
\text{sampling time $\Delta T$}
\qquad
S_d: \begin{cases}
	x(t+1) = Fx(t) + Gu(t) \\
	y(t) = Hx(t) + (Du(t))
\end{cases}
\]

Transformation formulas:
\begin{align*}
F &= e^{A\Delta T} \\
G &= \int_0^{\Delta T} e^{A\delta}B\, d\delta \\
H &= C \\
D &= D
\end{align*}
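
For example (the numbers are only illustrative), applying these formulas to a scalar system $\dot{x} = ax + bu$, $y = cx$ with $a \neq 0$ gives
\[
F = e^{a\Delta T} \qquad G = \int_0^{\Delta T} e^{a\delta} b \, d\delta = \frac{b}{a}\left(e^{a\Delta T} - 1\right) \qquad H = c
\]
e.g. with $a = -1$, $b = 1$ and $\Delta T = 0.1$ we get $F = e^{-0.1} \approx 0.905$ and $G = 1 - e^{-0.1} \approx 0.095$.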

\begin{remark}
How are the poles of the continuous-time system transformed?

It can be proved that the eigenvalues (poles) follow the \emph{sampling transformation rule}:
\[
z = e^{s\Delta T} \qquad \lambda_F = e^{\lambda_A \Delta T}
\]
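
For example (illustrative numbers), a continuous-time pole in $\lambda_A = -2$ sampled with $\Delta T = 0.1$ is mapped into $\lambda_F = e^{-0.2} \approx 0.82$: poles in the left half-plane are mapped inside the unit circle, so asymptotic stability is preserved.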

\missingfigure{Fig6}

How are the zeros of $S$ in continuous time transformed into zeros in discrete time?

Unfortunately there is no simple rule as for the poles. We can only say:
\[
G(s) = \frac{\text{polynomial in $s$ with $h$ zeros}}{\text{polynomial in $s$ with $k$ poles}} \qquad \text{if $G(s)$ is strictly proper: } k > h
\]
\[
G(z) = \frac{\text{polynomial in $z$ with $k-1$ zeros}}{\text{polynomial in $z$ with $k$ poles}} \qquad \text{$G(z)$ with relative degree 1}
\]

We have $k-h-1$ new zeros that are generated by the discretization.
They are called \emph{hidden zeros}.

Unfortunately these hidden zeros are frequently outside the unit circle, which means that $G(z)$ is not minimum phase even if $G(s)$ is minimum phase.
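
A classical illustration (assuming zero-order-hold sampling; the example is not worked out in the lecture) is given by pure integrators:
\[
G(s) = \frac{1}{s^2} \;\rightarrow\; G(z) = \frac{\Delta T^2}{2} \frac{z+1}{(z-1)^2} \qquad\qquad G(s) = \frac{1}{s^3} \;\rightarrow\; G(z) = \frac{\Delta T^3}{6} \frac{z^2+4z+1}{(z-1)^3}
\]
$G(s)$ has no zeros at all, but the discretized systems have $k-1$ zeros: $z = -1$ in the first case, and $z = -2 \pm \sqrt{3} \approx -0.27,\ -3.73$ in the second, where $-3.73$ lies outside the unit circle.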

In that case we need, for instance, GMVC to design the control system.
\end{remark}

Another simple and frequently used discretization technique is the discretization of the time derivative $\dot{x}$.

\begin{align*}
\text{\textbf{Euler backward}} &\qquad \dot{x} \approx \frac{x(t)-x(t-1)}{\Delta T} = \frac{x(t)-z^{-1}x(t)}{\Delta T} = \frac{z-1}{z\Delta T} x(t) \\
\text{\textbf{Euler forward}} &\qquad \dot{x} \approx \frac{x(t+1)-x(t)}{\Delta T} = \frac{zx(t)-x(t)}{\Delta T} = \frac{z-1}{\Delta T} x(t)
\end{align*}
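
For example, applying Euler forward to $\dot{x} = ax + bu$ gives
\[
\frac{x(t+1)-x(t)}{\Delta T} = a x(t) + b u(t) \quad\Rightarrow\quad x(t+1) = (1 + a\Delta T)\, x(t) + b\Delta T\, u(t)
\]
that is $F \approx 1 + a\Delta T$, the first-order approximation of the exact $F = e^{a\Delta T}$.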

General formula:
\[
\dot{x}(t) = \left[ \frac{z-1}{\Delta T} \frac{1}{\alpha z + (1-\alpha)} \right]x(t) \qquad \text{with } 0 \le \alpha \le 1
\]
\begin{itemize}
\item if $\alpha = 0$ it's Euler forward
\item if $\alpha = 1$ it's Euler backward
\item if $\alpha = \frac{1}{2}$ it's the Tustin method (see the substitution below)
\end{itemize}
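
For example, substituting $\alpha = \frac{1}{2}$ in the general formula gives
\[
\dot{x}(t) \approx \frac{z-1}{\Delta T} \frac{1}{\frac{1}{2}z + \frac{1}{2}}\, x(t) = \frac{2}{\Delta T} \frac{z-1}{z+1}\, x(t)
\]
which corresponds to the bilinear (Tustin) substitution $s \approx \frac{2}{\Delta T}\frac{z-1}{z+1}$.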

The critical choice is $\Delta T$ (sampling time).
The general intuitive rule is: the smaller $\Delta T$, the better.

\missingfigure{Fig7}

The smaller $\Delta T$, the larger $\omega_S$.

\missingfigure{Fig8}

\[
\Delta T \rightarrow f_S = \frac{1}{\Delta T} \qquad \omega_S = \frac{2\pi}{\Delta T} \qquad f_N = \frac{1}{2} f_S \qquad \omega_N = \frac{1}{2} \omega_S
\]
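
For instance, with $\Delta T = 10\,\mathrm{ms}$ (an illustrative value): $f_S = 100\,\mathrm{Hz}$, $\omega_S = 2\pi \cdot 100 \approx 628\,\mathrm{rad/s}$, $f_N = 50\,\mathrm{Hz}$ and $\omega_N \approx 314\,\mathrm{rad/s}$.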

Hidden problems of a too-small $\Delta T$:
\begin{itemize}
\item Sampling devices (A/D and D/A) cost
\item Computational cost: updating an algorithm every $1\,\mu s$ is much heavier than every $1\,ms$
\item Cost of memory (if data logging is needed)
\item Numerical precision \emph{cost} (hidden computational cost)
\end{itemize}

\missingfigure{Fig9}

If $\Delta T$ is very small (tends to zero), we squeeze all the poles very close to $(1,0)$. We need very high numerical precision (a lot of digits) to avoid instability.
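
For example (the numbers are only illustrative), a continuous-time pole in $s = -1$ sampled with $\Delta T = 1\,\mathrm{ms}$ ends up in $z = e^{-0.001} \approx 0.9990$: with too few digits it gets rounded to $1$, i.e. onto the stability boundary.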

Rule of thumb of control engineers: choose $f_S$ between 10 and 20 times the system bandwidth we are interested in.
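
For example, if the bandwidth of interest is about $10\,\mathrm{Hz}$, the rule suggests $f_S$ between $100$ and $200\,\mathrm{Hz}$, i.e. $\Delta T$ between $5$ and $10\,\mathrm{ms}$.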

\missingfigure{Fig10}

\begin{remark}[Another way of managing the choice of $\Delta T$ w.r.t. the aliasing problem]
\missingfigure{Fig11}

The classical way to deal with aliasing is to use analog anti-aliasing filters.

\missingfigure{Fig12}

For example, if the A/D conversion runs at $1\,\mathrm{kHz}$ ($\Delta T = 1\,\mathrm{ms}$), the Nyquist frequency is $f_N = 500\,\mathrm{Hz}$.
\missingfigure{Fig13}

\paragraph{Full digital approach} The aliasing problem is handled without an analog anti-aliasing filter.

\missingfigure{Fig14}
\end{remark}
