Stochastic modelling for the financial markets. Part 1. Probabilistic tools

Stochastic modelling for the financial markets. Part 1. Probabilistic tools : lecture notes / S. M. Pergamenshchikov, E. A. Pchelintsev. - Tomsk : Publishing House of Tomsk State University, 2017. - 46 p. - Text : electronic. - URL: https://znanium.com/catalog/product/1717075 (accessed: 26.04.2024). - Access mode: by subscription.

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION
NATIONAL RESEARCH TOMSK STATE UNIVERSITY
FACULTY OF MECHANICS AND MATHEMATICS







STOCHASTIC MODELLING FOR THE FINANCIAL MARKETS PART 1. PROBABILISTIC TOOLS




Lecture notes
for the courses "Stochastic Modelling" and "Theory of Random Processes" taken by Mathematics and Economics students (training directions 01.03.01 Mathematics and 38.04.01 Economics)









Tomsk 2017

   APPROVED by the Department of Mathematical Analysis
   Head of the Department, Associate Professor L.S. Kopaneva



   REVIEWED and APPROVED by the Methodical Commission of
the Faculty of Mechanics and Mathematics
   Minutes No ___ from "___" February 2017
   Chairman of the Commission, Associate Professor O.P. Fedorova



   The main goal of these lecture notes is to give the basic notions of stochastic calculus, such as conditional expectations, predictable processes, martingales, stochastic integrals and Ito's formula. The notes are intended for students of the Mathematics and Economics Faculties.
   This work was supported by the Ministry of Education and Science of the Russian Federation (Goszadanie No 1.172.2016 FPM).



            AUTHORS


   Professor Serguei M. Pergamenshchikov and Associate Professor
Evgeny A. Pchelintsev




© Tomsk State University, 2017

                Contents





1  Introduction                                          4
   1.1  Probability space............................... 4
   1.2  Random variables, vectors and mappings ......... 6
   1.3  Conditional expectations and conditional probabilities 7
   1.4  Stochastic basis............................... 13

2  Markovian moments                                    15

3  Stochastic processes                                 19

4  Optional and Predictable σ-fields                    21

5  Martingales                                          27

6  Stochastic integral                                  32

7  Appendix                                             38
   7.1  Carathéodory's extension theorem ............. 38
   7.2  Radon-Nikodym theorem ........................ 40
   7.3  Kolmogorov theorem............................. 41

References                                               44



1  Introduction





            1.1  Probability space


Definition 1.1. The measurable space (Ω, F, P) is called the probability space, where Ω is any fixed universal set, F is a σ-field and P is a probability measure.
   It should be noted that if the set Ω is finite or countable then the field (or σ-field) F is defined as all subsets of the set Ω, i.e. F = {A : A ⊆ Ω}. Moreover, in this case the probability is defined as

P(A) = Σ_{ω∈A} P({ω}),                     (1.1)

where P({ω}) is defined for every ω from Ω.


            Examples


   1. The Bernoulli space.
      The set Ω = {0, 1} and F = {Ω, ∅, {0}, {1}}. The probability is defined as P({0}) = p and P({1}) = 1 - p for some fixed 0 < p < 1. Note that, if p = 1/2, then we obtain the "throw a coin" model.


   2. The binomial space.
      The set Ω = {0, 1, ..., n} and F = {A : A ⊆ Ω}. In this case for any 0 ≤ k ≤ n the probability is defined as

P({k}) = C(n, k) p^k (1 - p)^(n-k),              (1.2)

      where C(n, k) = n!/(k!(n - k)!) is the binomial coefficient.

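As a quick numerical check of (1.2), the following Python sketch (the helper name `binomial_prob` is ours, not from the notes) evaluates P({k}) and confirms that the probabilities sum to one, as the binomial theorem guarantees.

```python
from math import comb

def binomial_prob(k: int, n: int, p: float) -> float:
    """P({k}) = C(n, k) p^k (1 - p)^(n - k) for the binomial space."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n, p = 10, 0.3
probs = [binomial_prob(k, n, p) for k in range(n + 1)]
total = sum(probs)  # equals 1 by the binomial theorem
```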

   3. The finite power of the Bernoulli spaces.
      The set Ω = {0, 1}^n = {ω^l}_{1 ≤ l ≤ 2^n}, where the ω^l are n-dimensional vectors, i.e. ω^l = (ω_{l,1}, ..., ω_{l,n}) and ω_{l,j} ∈ {0, 1}. The field F = {A : A ⊆ Ω} and

P({ω^l}) = p^{v_l} (1 - p)^{n - v_l},              (1.3)

      where v_l = Σ_{j=1}^n ω_{l,j}.
   4. The infinite power of the Bernoulli spaces.
      The set Ω = {0, 1}^∞ = {ω}. In this case ω = (ω_l)_{l≥1}, ω_l ∈ {0, 1} and the set Ω is not countable; moreover, this set is isomorphic to the interval [0, 1] by the natural representation

x = Σ_{l≥1} ω_l 2^{-l} ∈ [0, 1].              (1.4)


      So, for such a set Ω the σ-field F is the Borel σ-field generated by the intervals from [0, 1], i.e. F = B([0, 1]). The probability is the Lebesgue measure on the interval [0, 1].
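The representation (1.4) is easy to test numerically. The sketch below (the function name `dyadic_point` is illustrative) truncates the series for the periodic sequence ω = (1, 0, 1, 0, ...), whose dyadic sum is (1/2)/(1 - 1/4) = 2/3.

```python
def dyadic_point(omega):
    """Map a finite 0/1 sequence (omega_1, ..., omega_n) to x = sum_l omega_l * 2^(-l)."""
    return sum(w * 2.0 ** (-l) for l, w in enumerate(omega, start=1))

# omega = (1, 0, 1, 0, ...): the series sums to (1/2)/(1 - 1/4) = 2/3.
x_alt = dyadic_point([1, 0] * 30)
x_half = dyadic_point([1])  # the single flip "1" maps to 1/2
```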



            1.2   Random variables, vectors and mappings



   We recall that any measurable (Ω, F) → (R, B(R)) function ξ is called a random variable and any measurable (Ω, F) → (R^n, B(R^n)) function is called a random vector. Generally, for any measurable space (X, B(X)) a measurable (Ω, F) → (X, B(X)) function is called a random mapping.

   For any nonnegative random variable ξ we can define the Lebesgue integral as

E ξ = ∫_Ω ξ(ω) dP,

which is called the expectation. Note that any random variable can be written as ξ = ξ_+ - ξ_-, where ξ_+ = max(ξ, 0) and ξ_- = -min(ξ, 0). So, if min(E ξ_+, E ξ_-) < ∞, then we can define the expectation in the general case as

E ξ = E ξ_+ - E ξ_-.

   The function defined as F(x) = P(ξ < x) is called the distribution function.
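On a finite probability space the decomposition ξ = ξ_+ - ξ_- can be checked directly. In this Python sketch (the four-point space and all names are our own illustration) the expectation computed from the two parts agrees with the direct weighted sum.

```python
def expectation(values, probs):
    """E xi = sum over omega of xi(omega) * P({omega}) on a finite space."""
    return sum(v * p for v, p in zip(values, probs))

values = [-2.0, -0.5, 1.0, 3.0]   # xi(omega) on a 4-point space
probs = [0.1, 0.2, 0.3, 0.4]      # P({omega}); sums to 1

xi_plus = [max(v, 0.0) for v in values]    # xi_+ = max(xi, 0)
xi_minus = [-min(v, 0.0) for v in values]  # xi_- = -min(xi, 0)

e_direct = expectation(values, probs)
e_split = expectation(xi_plus, probs) - expectation(xi_minus, probs)
```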


Examples



            1. Construction of the probability space for fixed random variables with values in R.


      Let F be a distribution function on R. Now through Carathéodory's extension theorem we obtain the probability measure μ on the Borel σ-field B(R) for which μ([a, b)) = F(b) - F(a) for any interval [a, b) with a < b. To define the probability space we set Ω = R, F = B(R) and P = μ. In this case the random variable ξ(x) = x has the distribution function F.


            2. Construction of the probability space for fixed random variables with values in Rm.


      Let μ be a probability measure on R^m. Similarly to the previous example we set Ω = R^m, F = B(R^m) and P = μ. In this case the random variable ξ(x) = x has the distribution μ.


            1.3   Conditional expectations and conditional probabilities


   Let now (Ω, F, P) be some probability space. Moreover, let ξ be some integrable random variable with values in R and G some σ-field contained in the probability space, i.e. G ⊆ F.


Definition 1.2. The random variable E(ξ|G) is called the conditional expectation if the following conditions hold:
   1) E(ξ|G) is a G-measurable random variable;
   2) for any bounded G-measurable random variable α

E α ξ = E (α E(ξ|G)).                (1.5)

   Note that this definition is correct, i.e. if there exists a G-measurable random variable f satisfying the property (1.5), then it equals the conditional expectation. Indeed, if we set

α = sign(f - E(ξ|G)),

then the equality (1.5) implies that E |f - E(ξ|G)| = 0, i.e. f = E(ξ|G) a.s.
We can also use another definition of the conditional expectation.
Definition 1.3. The random variable E(ξ|G) is called the conditional expectation if the following conditions hold:
   1) E(ξ|G) is a G-measurable random variable;
   2) for any A ∈ G

E 1_A ξ = E (1_A E(ξ|G)).               (1.6)


   Note that to show the existence of the conditional expectation we use the Radon-Nikodym theorem. Indeed, for any A ∈ G we introduce the measure ν as

ν(A) = ∫_A ξ dP.                        (1.7)

It is clear that the measure ν is finite and, moreover, ν ≪ P. So, through the Radon-Nikodym theorem, there exists a unique G-measurable random variable ρ such that

ν(A) = ∫_A ρ dP.

We can do the same construction for any positive random variable ξ, not necessarily integrable. So, we can define the conditional expectation for any positive random variable ξ. For a general random variable ξ we can define the conditional expectation if E ξ_- < ∞. In this case we set

E(ξ|G) = E(ξ_+|G) - E(ξ_-|G).
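When Ω is finite and G is generated by a partition of Ω, the Radon-Nikodym construction reduces to averaging ξ over each block of the partition. The sketch below (all names and the six-point space are illustrative) builds E(ξ|G) this way and verifies the defining property (1.6) for a block A ∈ G.

```python
# Omega = {0, ..., 5}, uniform P; G is the sigma-field generated by the partition.
omega = range(6)
P = {w: 1.0 / 6.0 for w in omega}
xi = {w: float(w * w) for w in omega}     # xi(w) = w^2
partition = [{0, 1}, {2, 3}, {4, 5}]

def cond_exp(xi, P, partition):
    """E(xi|G): on each block B, the P-weighted average of xi over B."""
    ce = {}
    for block in partition:
        p_block = sum(P[w] for w in block)
        avg = sum(xi[w] * P[w] for w in block) / p_block
        for w in block:
            ce[w] = avg
    return ce

ce = cond_exp(xi, P, partition)

# Defining property (1.6) with A = {2, 3} in G:
A = {2, 3}
lhs = sum(xi[w] * P[w] for w in A)        # E 1_A xi
rhs = sum(ce[w] * P[w] for w in A)        # E 1_A E(xi|G)
```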


Definition 1.4. Let η be some random variable. We define the conditional expectation with respect to the random variable η as

E(ξ|η) = E(ξ|G_η),

where G_η = σ{η} is the σ-field generated by the random variable η.
   From the definition of the conditional expectation E(ξ|η) it follows that there exists some Borel R → R function m such that

E(ξ|η) = m(η).

This function is called the conditional expectation with respect to the fixed values of η, i.e. for any y ∈ R

E(ξ|η = y) = m(y).                   (1.8)
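The function m in (1.8) can be estimated from simulated data by averaging ξ over the draws with η = y. In the sketch below (a toy model of our own choosing) ξ = η + noise with centered noise, so m(y) = y.

```python
import random

random.seed(0)
# Toy model: eta uniform on {0, 1, 2}, xi = eta + Gaussian noise, so m(y) = y.
samples = []
for _ in range(60_000):
    y = random.randrange(3)
    samples.append((y, y + random.gauss(0.0, 1.0)))

def m_hat(y, samples):
    """Estimate m(y) = E(xi | eta = y) by averaging xi over the draws with eta = y."""
    vals = [x for (e, x) in samples if e == y]
    return sum(vals) / len(vals)

estimates = [m_hat(y, samples) for y in range(3)]
```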


            Properties of the conditional expectations


   1. If η is a constant, then E(ξ|η) = E ξ.
   2. Let ξ and η be two random variables such that the conditional expectations E(ξ|G) and E(η|G) exist and ξ ≤ η a.s. Then E(ξ|G) ≤ E(η|G) a.s.
   3. If ξ and η are independent, then E(ξ|η) = E ξ.
   4. If the σ-field generated by the random variable ξ is smaller than the σ-field generated by the random variable η, i.e.

σ{ξ} ⊆ σ{η},

      then E(ξ|η) = ξ.
   5. Let ξ and η be random variables such that σ{ξ} ⊆ σ{η}. Then for any integrable random variable γ

E(γ ξ|η) = ξ E(γ|η).

   6. Let A and B be two σ-fields such that A ⊆ B ⊆ F. Then

E(ξ|A) = E(E(ξ|B)|A).

   7. Let ξ be a square integrable random variable, i.e. E ξ² < ∞. The conditional expectation E(ξ|G) is the projection in L₂(Ω, F, P) onto the subspace L₂(Ω, G, P), i.e. for any η ∈ L₂(Ω, G, P)

E (ξ - E(ξ|G))² ≤ E (ξ - η)².
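Property 7 can be checked on a small discrete space: with G generated by a two-block partition, E(ξ|G) is the vector of block averages, and no other G-measurable η is closer to ξ in L₂. The sketch below (all names and numbers are illustrative) compares the projection against a few candidates.

```python
# A 4-point space; G is generated by the partition {{0, 1}, {2, 3}}.
P = [0.1, 0.4, 0.2, 0.3]
xi = [1.0, 2.0, 5.0, 7.0]
blocks = [(0, 1), (2, 3)]

# E(xi|G): the P-weighted average of xi on each block.
ce = {}
for block in blocks:
    p_block = sum(P[w] for w in block)
    avg = sum(xi[w] * P[w] for w in block) / p_block
    for w in block:
        ce[w] = avg

def l2_dist(eta):
    """E (xi - eta)^2 for the G-measurable eta taking value eta[i] on block i."""
    vals = {0: eta[0], 1: eta[0], 2: eta[1], 3: eta[1]}
    return sum(P[w] * (xi[w] - vals[w]) ** 2 for w in range(4))

best = l2_dist((ce[0], ce[2]))  # distance from xi to E(xi|G)
# Any other G-measurable eta is at least as far from xi in L2:
others = [l2_dist((a, b)) for a in (0.0, 1.5, 3.0) for b in (4.0, 6.0, 8.0)]
```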



    Let now ξ and η be two random variables with the density


