There is a nice discussion about this in Calculus and Analytic Geometry, G. B. Thomas.
Also, Prof. Steven Strogatz discusses the brachistochrone here:
The following is one of the greatest triumphs of pure mathematics: its applications to the glamorous world of IT, finance, science, and engineering! Hats off to Prof. Yves Meyer, winner of the 2017 Abel Prize:
Reproduced from the newspaper DNA, print edition, Mumbai, Sunday, Mar 5, 2017 (Health section):
Biomedical solutions via math:
(A new model combines mathematics with biology to set the stage for cancer cure and other diseases):
Ann Arbor:
How do our genes give rise to proteins, proteins to cells, and cells to tissues and organs? What makes a cluster of cells become a liver or a muscle? The incredible complexity of these biological systems drives the work of biomedical scientists. But, two mathematicians have introduced a new way of thinking that may help set the stage for better understanding of our bodies and other living things.
The pair, from the University of Michigan Medical School and the University of California, Berkeley, talk of using math to understand how genetic information and interactions between cells give rise to the actual function of a particular type of tissue. While the duo admit that it is a highly idealized framework that does not take into account every detail of this process, they argue that is exactly what's needed. By stepping back and making a simplified model based on mathematics, they hope to create a basis for scientists to understand the changes that happen over time within and between cells to make living tissues possible. It could also help with understanding how diseases such as cancer can arise when things don’t go as planned.
Turning to Turing’s machine:
U-M Medical School Assistant Professor of Computational Medicine, Indika Rajapakse and Berkeley Professor Emeritus, Stephen Smale have worked on the concepts for several years. “All the time, this process is happening in our bodies, as cells are dying and arising, and yet, they keep the function of the tissue going,” says Rajapakse. “We need to use beautiful mathematics and beautiful biology together to understand the beauty of a tissue.”
For the new work, they even hearken back to the work of Alan Turing, the pioneering British mathematician famous for his “Turing machine” and for his code-breaking work during World War II.
Toward the end of his life, Turing began looking at the mathematical underpinnings of morphogenesis — the process that allows natural patterns such as a zebra’s stripes to develop as a living thing grows from an embryo to an adult.
“Our approach adapts Turing’s technique, combining genome dynamics within the cell and the diffusion dynamics between cells,” says Rajapakse, who leads the U-M 4D Genome Lab in the Department of Computational Medicine and Bioinformatics.
His team of biologists and engineers conducts experiments that capture human genome dynamics in three dimensions, using biochemical methods and high-resolution imaging.
Bringing math and the genome together
Smale, who is retired from Berkeley but still active in research, is considered a pioneer of modelling dynamical systems. Several years ago, Rajapakse approached him during a visit to U-M, where Smale earned his undergraduate and graduate degrees. They began exploring how to study the human genome (the set of genes in an organism’s DNA) as a dynamic system.
They based their work on the idea that while the genes of an organism remain the same throughout life, how cells use them does not.
Last spring, they published a paper that lays a mathematical foundation for gene regulation — the process that governs how often and when genes get “read” by cells in order to make proteins.
Instead of the nodes of those networks being static, as Turing assumed, the new work sees them as dynamic systems. The genes may be “hard-wired” into the cell, but how they are expressed depends on factors such as epigenetic tags added as a result of environmental factors, and more.
Next Step:
As a result of his work with Smale, Rajapakse now has funding from the Defense Advanced Research Projects Agency (DARPA) to keep exploring the issue of emergence of function, including what happens when the process changes.
Cancer, for instance, arises from a cell development and proliferation cycle gone awry. And the process by which induced pluripotent stem cells are made in a lab (essentially turning back the clock on a cell type so that it regains the ability to become other cell types) is another example.
Rajapakse aims to use data from real world genome and cell biology experiments in his lab to inform future work, focused on cancer and cell reprogramming.
He’s also organizing a gathering of mathematicians from around the world to look at computational biology and the genome this summer in Barcelona.
**************************************************************************************
Thanks to DNA, Prof. Stephen Smale, and Prof. Indika Rajapakse; this, in my view, is one of the many applications of math.
–Nalin Pithwa.
Processors suitable for digital control range from standard microprocessors such as the 8051 to special-purpose DSP processors, the primary differences being the instruction sets and the speeds of particular instructions, such as multiply. Standard microprocessors, or general-purpose processors, are intended for laptops, workstations, and general digital data bookkeeping. Because digital control involves much numerical computation, the instruction sets of special-purpose DSP processors are rich in math capabilities and are better suited to control applications than those of general-purpose processors.
Processor requirements span a broad range. For a processor such as the one found in a microwave oven, the speed, instruction-set, memory, word-length, and addressing-mode requirements are all very minimal. The consequences of a data error are minimal as well, especially relative to a data error in a PC or laptop while it is calculating an income tax return. PCs and laptops, on the other hand, require many megabytes of memory, and they benefit from speed, error correction, larger word size, and sophisticated addressing modes.
DSP processors generally need speed, word length, and math instructions such as multiply, multiply-and-accumulate, and circular addressing. One typical feature of signal processors not found in general-purpose processors is the Harvard architecture, which uses separate data and program memory. Although separate data and program memory offer significant speed advantages, the IC pin count is higher when external memory is allowed, because the instruction-address, instruction-data, data-address, and data buses are all separate. A modified Harvard architecture has therefore been used: it maintains some of the speed advantage while eliminating the requirement for separate program and data buses, greatly reducing the pin count in processors that have external memory capability (almost all do).
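As a rough illustration of the multiply-and-accumulate and circular-addressing pattern mentioned above, here is a minimal sketch in Python (not DSP assembly; the function and variable names are my own) of an FIR filter whose delay line is a circular buffer:

def fir_step(taps, state, pos, x_new):
    """Compute one FIR output sample using a circular delay line.

    taps  : filter coefficients h[0..N-1]
    state : length-N circular buffer holding past input samples
    pos   : index where the newest sample is written (wraps around)
    On a DSP, the index wrap and the multiply-and-accumulate would each
    typically be a single instruction.
    """
    n = len(taps)
    state[pos] = x_new                          # overwrite the oldest sample
    acc = 0.0
    for k in range(n):                          # MAC loop
        acc += taps[k] * state[(pos - k) % n]   # circular addressing
    return acc, (pos + 1) % n

# 4-tap moving average applied to a short ramp
h = [0.25, 0.25, 0.25, 0.25]
buf, pos = [0.0] * 4, 0
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
    y, pos = fir_step(h, buf, pos, x)
    print(y)                                    # 0.25, 0.75, 1.5, 2.5, 3.5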
Comparing control with signal-processing applications: in the former we often employ saturation and therefore absolutely require saturation arithmetic, whereas in the latter, to ensure signal fidelity, the algorithms must usually be designed to prevent saturation by scaling the signals appropriately.
The consequences of numerical overflow in control computations can be serious, even destabilizing. In most forms of numerical computation, it is usually better to suffer the non-linearity of signal saturation than the effects of numerical overflow.
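A minimal sketch (Python, with 16-bit two's-complement behaviour emulated by hand; the helper names are mine) of the contrast drawn above: wraparound overflow flips the sign of the result, while saturation merely clips it:

INT16_MIN, INT16_MAX = -32768, 32767

def add_wrap(a, b):
    """16-bit two's-complement addition that wraps around on overflow."""
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s > INT16_MAX else s

def add_sat(a, b):
    """16-bit addition with saturation arithmetic: clip instead of wrapping."""
    return max(INT16_MIN, min(INT16_MAX, a + b))

# A control signal already near full scale receives a small increment:
print(add_wrap(32000, 1000))   # -32536: the sign flips, which can destabilize a loop
print(add_sat(32000, 1000))    #  32767: a bounded, comparatively benign non-linearity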
For most control applications, it is advantageous to select a processor that does not require much support hardware. One of the most commonly cited advantages of digital control is the freedom from noise in the control processor. Although it is true that controller noise is nominally limited to quantization noise, it is not true that the digital controller enjoys infinite immunity from noise. Digital logic is designed with certain noise margins, which are of course finite. When electromagnetic radiation impinges on the digital control system, there is a finite probability of making an error. A further consequence is that although a digital controller can have a very high threshold of immunity, without error detection and correction it is just as likely to make a large error as a small one: the MSB and the LSB of a bus have equal margins against noise.
In addition to external sources of error-causing signals, the possibility for circuit failure exists. If a digital logic circuit threshold drifts outside the design range, the consequences are usually catastrophic.
For operational integrity, error detection is a very important feature.
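As a toy illustration of error detection (a single even-parity bit over a 16-bit word; practical systems use stronger codes such as CRCs or ECC, and the function names here are hypothetical):

def parity_bit(word, width=16):
    """Even-parity bit: 1 if the word contains an odd number of 1-bits."""
    return bin(word & ((1 << width) - 1)).count("1") & 1

def check(word, parity):
    """Detect any single-bit error, whether it hits the MSB or the LSB."""
    return parity_bit(word) == parity

word = 0x1234
p = parity_bit(word)               # sent along with the word
print(check(word, p))              # True : no corruption
print(check(word ^ 0x8000, p))     # False: MSB flipped, detected
print(check(word ^ 0x0001, p))     # False: LSB flipped, equally detected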
I hope to compare, if possible, some families of Digital Control processors here, a bit later.
Regards,
Nalin Pithwa
Reference:
Digital Control of Dynamic Systems, Franklin, Powell and Workman.
Ref: (1) Digital Control of Dynamic Systems by Franklin, Powell, and Workman; (2) the internet; (3) manuals of various DSP vendors such as TI and Analog Devices.
Before examining some of the requirements placed on digital signal or digital control processors (DSPs or DCPs), let's define some of the basic parametric considerations in choosing a processor.
We will examine the details soon.
Nalin Pithwa
(Authors: Prof. Navdeep M. Singh, VJTI, University of Mumbai and Nalin Pithwa, 1992).
Abstract: The bilinear transformation can be achieved by using the method of synthetic division. A good deal of simplification is obtained when the process is implemented as a sequence of matrix operations. Moreover, the matrices are found to have simple structures suitable for algorithmic implementation.
I) INTRODUCTION:
Davies [1] proposed a method for bilinear transformation using synthetic division. The method is considerably simplified when the synthetic division is carried out as a set of matrix operations, because the operator matrices are found to have simple structures and can therefore be generated easily.
II) THE ALGORITHM:
Given a discrete-time transfer function in the z-plane, it is transformed to the s-plane under the bilinear transformation. This can be achieved sequentially as follows:
The first step is to represent the given characteristic polynomial in the standard companion form. Since the companion form represents a monic polynomial, appropriate scaling is required in the course of the algorithm to ensure that the polynomial generated remains monic after each step of the transformation.
The method is developed for a third degree polynomial and then generalized.
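For concreteness, one possible sequential decomposition is sketched below; it assumes the common convention $s = \frac{z+1}{z-1}$ (equivalently $z = \frac{s+1}{s-1}$) and a root-scaling factor of 2, and the labels $P_{1}, \ldots, P_{4}$ are mine, so this is an illustration rather than a restatement of the exact convention used here. For a monic $P(z)$ of degree $n$ with $P(1) \neq 0$:

$$P_{1}(z) = P(z+1), \qquad P_{2}(z) = \frac{z^{n}\,P_{1}(1/z)}{P_{1}(0)}, \qquad P_{3}(z) = 2^{n}\,P_{2}(z/2), \qquad P_{4}(s) = P_{3}(s-1),$$

so that each root $z_{0}$ of $P$ is carried to $\frac{2}{z_{0}-1} + 1 = \frac{z_{0}+1}{z_{0}-1}$, and the interior of the unit circle is mapped onto the left half-plane.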
Step 1:
(decreasing of roots by one)
Given the third-degree monic polynomial in $z$ (with $a_{3}=1$), the coefficients of the polynomial whose roots are decreased by one are obtained as combinations of the original coefficients. In the companion form, the corresponding transformation of the companion matrix is sought. Performing elementary row and column transformations on $A$ using matrix operators, the final row-operator and column-operator matrices are found, and pre- and post-multiplying $A$ by them yields the transformed matrix $B$. In general, for a polynomial of degree $n$, both operator matrices have the same dimensions as $A$, and so does $B$. The row- and column-operator matrices for the remaining transformation are likewise found to have simple general structures: both are lower triangular, of the same dimensions as $A$, and can be generated element by element. Thus, when $A$ is the companion form of a polynomial of any degree $n$, applying these operators gives, in companion form, the polynomial whose roots are decreased by one.
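As an illustration of Step 1, here is a minimal Python sketch of the underlying synthetic division (not the operator-matrix formulation above; the function name and the highest-degree-first coefficient convention are my own):

def decrease_roots_by_one(coeffs):
    """Coefficients (highest degree first) of P(z + 1); every root drops by one.
    Each pass is one synthetic division by (z - 1); the remainders give the
    new coefficients, constant term first."""
    n = len(coeffs) - 1
    rems, work = [], list(coeffs)
    for _ in range(n):
        b = [work[0]]
        for c in work[1:]:
            b.append(c + b[-1])     # synthetic division with divisor root +1
        rems.append(b[-1])          # remainder
        work = b[:-1]               # quotient (one degree lower)
    rems.append(work[0])
    return rems[::-1]

# P(z) = z^3 + 2z^2 - z - 2 has roots 1, -1, -2; the result has roots 0, -2, -3:
print(decrease_roots_by_one([1, 2, -1, -2]))   # [1, 5, 6, 0], i.e. z^3 + 5z^2 + 6z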
Step 2:
(scaled inversion).
The coefficients of the inverted polynomial are the original coefficients taken in reverse order; dividing the entire polynomial by its new leading coefficient ensures that the polynomial generated is monic and hence can be represented in the companion form. The corresponding transformation of the companion matrix is achieved by row- and column-operator matrices which again have simple general structures; in general, both operators and the resulting matrix are of the same dimensions as $A$.
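A corresponding sketch of Step 2 (Python, my own naming; it assumes the constant term is non-zero, i.e. the polynomial has no root at the origin):

def scaled_inversion(coeffs):
    """Replace z by 1/z and multiply by z^n: the coefficient order reverses and
    every root becomes its reciprocal; dividing by the new leading coefficient
    (the old constant term) keeps the polynomial monic."""
    rev = list(coeffs)[::-1]
    return [c / rev[0] for c in rev]

# Monic z^2 - 5z + 6 has roots 2 and 3; the result z^2 - (5/6)z + 1/6 has roots 1/2 and 1/3:
print(scaled_inversion([1.0, -5.0, 6.0]))   # [1.0, -0.8333..., 0.1666...]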
Step 3:
(scaling of roots)
If every root is multiplied by the chosen scale factor, the coefficients of the scaled polynomial are the original coefficients multiplied by successive powers of that factor, so a monic polynomial remains monic. The corresponding transformation of the companion matrix is achieved by row- and column-operator matrices which, in general, are of the same dimensions as $A$.
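A sketch of Step 3 (Python, my own naming; the particular scale factor used above is not reproduced here, so it is passed in as a parameter):

def scale_roots(coeffs, lam):
    """Replace P(z) by lam**n * P(z/lam): every root is multiplied by lam and a
    monic polynomial stays monic (coefficients highest degree first)."""
    return [c * lam ** i for i, c in enumerate(coeffs)]

# z^2 - 3z + 2 has roots 1 and 2; with lam = 2 the result z^2 - 6z + 8 has roots 2 and 4:
print(scale_roots([1, -3, 2], 2))   # [1, -6, 8]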
Step 4:
(increasing of roots by one)
For the third-degree case, the transformation in which every root is increased by one is sought; the new coefficients are again combinations of the old ones. The corresponding row- and column-operator matrices are, in general, of the same dimensions as $A$; both are lower triangular and can be generated element by element. The remaining transformation is achieved in the same way, with row- and column-operator matrices that are likewise lower triangular, of the same dimensions as $A$, and easily generated.
This completes the process of bilinear transformation. Steps 2 and 3 can be combined, so that the algorithm reduces to three steps only.
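To tie the four steps together, here is a self-contained numerical check (plain coefficient arithmetic in Python rather than the operator-matrix form developed above; it assumes the convention $s = (z+1)/(z-1)$ and a root-scaling factor of 2, which are my assumptions and not necessarily the exact convention used here):

import numpy as np

def shift_roots(coeffs, a):
    """Coefficients (highest degree first) of P(z + a); every root drops by a.
    a = +1 is Step 1 (decreasing of roots by one), a = -1 is Step 4 (increasing)."""
    n = len(coeffs) - 1
    rems, work = [], list(coeffs)
    for _ in range(n):
        b = [work[0]]
        for c in work[1:]:
            b.append(c + a * b[-1])   # synthetic division by (z - a)
        rems.append(b[-1])            # remainder
        work = b[:-1]                 # quotient
    rems.append(work[0])
    return rems[::-1]

def scaled_inversion(coeffs):
    """Step 2: reverse the coefficients (z -> 1/z times z^n) and rescale to monic."""
    rev = list(coeffs)[::-1]
    return [c / rev[0] for c in rev]

def scale_roots(coeffs, lam):
    """Step 3: multiply every root by lam while keeping the polynomial monic."""
    return [c * lam ** i for i, c in enumerate(coeffs)]

# Example: monic P(z) = z^3 - z^2 + 0.27 z - 0.018 with roots 0.1, 0.3, 0.6 (inside |z| < 1).
p = [1.0, -1.0, 0.27, -0.018]

q = shift_roots(p, 1.0)       # Step 1: decrease roots by one
q = scaled_inversion(q)       # Step 2: scaled inversion
q = scale_roots(q, 2.0)       # Step 3: scale roots by 2
q = shift_roots(q, -1.0)      # Step 4: increase roots by one

# Under s = (z + 1)/(z - 1), each root z0 of P should map to (z0 + 1)/(z0 - 1),
# so roots inside the unit circle must land in the left half of the s-plane.
z_roots = np.roots(p)
expected = np.sort([(z0 + 1.0) / (z0 - 1.0) for z0 in z_roots])
obtained = np.sort(np.roots(q))
print(np.allclose(expected, obtained))   # True
print(obtained)                          # approximately [-4.0, -1.857, -1.222]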
If the original polynomial is non-monic (that is, its leading coefficient is not unity), then multiplying the final transformed polynomial by the appropriate overall scale factor restores it to the standard form.
III. Stability considerations:
In the s-plane, the Schwarz canonical-form approach can be applied as an algorithm directly to the companion form of the bilinearly transformed polynomial obtained above, because a companion matrix is non-derogatory.
IV. An Example:
A numerical example is worked through Step 1, the combined Steps 2 and 3, and Step 4. The final monic polynomial, multiplied by the overall scale factor, is restored to the non-monic form.
V. Conclusion:
Since the operator matrices have few non-zero elements, the storage requirements are small. The computational complexity should also be reduced for higher-order systems, because fewer non-zero elements mean fewer manipulations, in addition to the smaller storage requirement. Additionally, the second and third steps can be combined, giving a three-step method. Thus, the algorithm achieves the bilinear transformation easily, especially for higher-order systems, compared with other methods available hitherto.
VI. References: