MINISTRY OF SCIENCE AND HIGHER EDUCATION OF THE RUSSIAN FEDERATION Federal State Autonomous Educational Institution of Higher Education «SOUTHERN FEDERAL UNIVERSITY»


Academy for Engineering and Technologies






        M. Yu. MEDVEDEV
        A. E. KULCHENKO






        NEURAL NETWORKS FUNDAMENTALS IN MOBILE ROBOT CONTROL SYSTEMS




Textbook












Rostov-on-Don - Taganrog
        Southern Federal University Press
2020

УДК 004.032.26:004.896(075.8)
ББК 32.97я73
     М46
Published by decision of the Department of Electrical Engineering and Mechatronics of the Institute of Radio Engineering Systems and Control
of Southern Federal University (minutes No. 5 of March 17, 2020)
Reviewers:
leading programmer at Luxoft Professional LLC
(St. Petersburg branch), Candidate of Engineering Sciences
V. A. Krukhmalev
Professor of the Department of Automatic Control Systems, IRTSU SFedU, Doctor of Engineering Sciences, Professor A. R. Gaiduk
   Medvedev, M. Yu.
М46 Neural networks fundamentals in mobile robot control systems : textbook / M. Yu. Medvedev, A. E. Kulchenko ; Southern Federal University. - Rostov-on-Don ; Taganrog : Southern Federal University Press, 2020. - 144 p.
        ISBN 978-5-9275-3587-3
        The textbook gives a complete and systematic presentation of the material of the course "Intelligent Robot Control Systems". It is addressed to bachelor's and master's students majoring in Mechatronics and Robotics at the Institute of Radio Engineering Systems and Control of Southern Federal University. It covers an introduction to neural networks and their applications, the bases of learning of neural networks, multilayered feedforward neural networks, advanced methods for learning neural networks, and variants of individual exercises.

УДК 004.032.26:004.896(075.8)
ББК 32.97я73
ISBN 978-5-9275-3587-3





                                  © Southern Federal University, 2020
                                  © Medvedev M. Yu., Kulchenko A. E., 2020
                                  © Design and layout. Southern Federal
University Press, 2020

        CONTENTS


1.  LECTURE: INTRODUCTION TO NEURAL NETWORKS.....................    6
    1.1. Application of artificial intelligence in robotics......... 6
    1.2. Structure of an intelligent control system of a robot...... 7
    1.3. The artificial intelligence technologies taxonomy.......... 8
    1.4. Morphology of a biological neuron.......................... 9
    1.5. Mathematical model of a biological neuron.................. 9
    1.6. A neural model for a threshold logic...................... 10
    1.7. A neural threshold logic synthesis........................ 12
    1.8. Problems.................................................. 14
  Practical training 1 ............................................ 15
    1.9. Task for practical training 1 ............................ 15
    1.10. Example of the practical training performing............. 16
    1.11. Variants................................................. 18
    1.12. Requirements to the results representation............... 19
  Practical training 2............................................. 20
    1.13. Task for practical training 2............................ 20
    1.14. Example of the practical training 2 performing........... 22
    1.15. Variants................................................. 24
    1.16. Requirements to the results representation............... 24

2.  LECTURE: BASES OF LEARNING OF NEURAL NETWORKS................. 26
    2.1. Parametric adaptation of the neural threshold element...    26
    2.2. The perceptron rule of adaptation......................... 27
    2.3. Mays adaptation rule...................................... 28
    2.4. Adaptive linear element................................... 29
    2.5. α-Least Mean Square Algorithm............................. 29
    2.6. Mean Square Error Method.................................. 31
    2.7. μ-Least Mean Square Algorithm............................. 32
    2.8. Adaline with sigmoidal functions.......................... 32
    2.9. Backpropagation method.................................... 33
    2.10. A simple network with three neurons...................... 34
    2.11. Backpropagation learning................................. 35
    2.12. Problems................................................. 37


  Practical training 3 ............................................. 38
    2.13.  Task for practical training 3 ........................... 38
    2.14.  Example of the practical training 3 performing........... 40
    2.15.  Variants................................................. 46
    2.16.  Requirements to the results representation............... 46
  Practical training 4.............................................. 48
    2.17.  Task for practical training 4............................ 48
    2.18.  Example of the practical training 4 performing........... 49
    2.19.  Variants................................................. 55
    2.20.  Requirements to the results representation............... 56

3.  LECTURE: MULTILAYERED FEEDFORWARD STATIC NEURAL NETWORKS...... 58
    3.1. Two layered neural network mathematical description.......  58
    3.2. Generalized delta rule..................................... 60
    3.3. Network with linear output neurons......................... 62
    3.4. Structure of a multi-layered feedforward neural network...  62
    3.5. Description of a multi-layered feedforward neural network.  63
    3.6. Generalized Delta Rule for MFNN............................ 64
    3.7. Recursive computation of delta............................. 64
    3.8. Momentum BP algorithm...................................... 65
    3.9. A Summary of BP learning algorithm......................... 66
    3.10. Some issues in BP learning algorithm...................... 67
    3.11. Local minimum problem..................................... 70
    3.12.  Problems................................................. 70
  Practical training 5.............................................. 72
    3.13. Task for practical training 5............................. 72
    3.14. Example of the practical training 5 performing............ 72
    3.15. Variants.................................................. 91
    3.16. Requirements to the results representation................ 92
  Practical training 6.............................................. 93
    3.17. Task for practical training 6............................ 93
    3.18. Example of the practical training 6 performing............ 93
    3.19. Variants................................................. 104


    3.20. Requirements to the results representation............ 105

4.  LECTURE: ADVANCED METHODS FOR LEARNING NEURAL NETWORKS.................................................... 106
    4.1. Different Criteria for Error Measure................... 106
    4.2. Complexities in Regularization......................... 108
    4.3. Weight Decay Approach.................................. 108
    4.4. Weight Elimination Approach............................ 109
    4.5. Chauvin’s Penalty Approach............................. 110
    4.6. Network Pruning Through Sensitivity Calculation........ 110
    4.7. Karnin’s Pruning Method................................ 112
    4.8. Optimal Brain Damage................................... 112
    4.9. Calculation of the Hessian Matrix...................... 114
    4.10. Second-order Optimization Learning Algorithms......... 117
    4.11. Recursive Estimation Learning Algorithms.............. 119
    4.12. Tapped Delay Line Neural Networks..................... 122
    4.13. Applications of TDLNN for Adaptive Control Systems...   122
    4.14. Problems.............................................. 124
Practical training 7.......................................... 125
    4.15. Task for practical training 7......................... 125
    4.16. Example of the practical training 7 performing........ 126
    4.17. Variants.............................................. 141
    4.18. Requirements to the results representation............ 141

BIBLIOGRAPHY.................................................... 143

        1. LECTURE


        INTRODUCTION TO NEURAL NETWORKS


        1.1. Application of artificial intelligence in robotics


      Nowadays, the automatic control of unmanned mobile objects is more efficient than remote control. An autopilot is more accurate and makes decisions faster than a human driver, and it is not prone to operator mistakes (Fig. 1.1) [1-3].


Fig. 1.1. Autopilots vs. remote control

       Therefore, the mainstream of robotics is to increase the number of autonomously performed operations.
       However, some problems remain hard for automatic control (Fig. 1.1-1.2). Among them are driving in traffic, estimation, criteria determination, decision making in uncertain environments, etc. Characteristics of these hard problems are a dynamical environment, uncertainties, singularities, conflicting criteria, a large number of solutions, and ill-posed problem statements [4-7].


Fig. 1.2. Hard problems

        Intelligent systems are used to solve hard problems.
        In robotics, an intelligent system is a system that solves the problems of goal-setting, planning, and control in a dynamic, uncertain environment without the participation of an operator.


        1.2.  Structure of an intelligent control system of a robot

       Artificial intelligence solves the following problems (Fig. 1.3).
     1.        Global planning problems: criteria determination and global path planning.
     2.        Local planning problems: obstacle avoidance, elimination of environment uncertainty, and environment prediction.
     3.        Motion control problems: adaptation to the robot model and adaptation to disturbances.



Fig. 1.3. Structure of an intelligent control system of a robot


         1.3. The artificial intelligence technologies taxonomy

       The artificial intelligence approaches are based on different technologies.
       1.       An expert knowledge base emulates the experience of experts in the subject area. Examples are diagnostic systems in medicine and engineering. The advantage of an expert knowledge base is that it can accumulate the knowledge of an unlimited number of experts; the disadvantage is the difficulty of extracting and copying that expertise. Expert knowledge bases are used as recommendation systems.
       2.       Fuzzy logic emulates human cognitive functions (learning, thinking, reasoning, and adaptation) by means of fuzzy sets. Fuzzy logic uses imprecise notions such as "big velocity", "low temperature", etc. The main trait of fuzzy logic is the uncertain boundaries between fuzzy sets. Fuzzy logic is applied to decision-making problems.



       3.        Evolutionary algorithms emulate the mechanisms of natural selection. The best-known evolutionary algorithm is the genetic algorithm. Evolutionary algorithms are applied to the global search for optimal solutions instead of brute-force searching.
       4.        Bio-inspired algorithms emulate the individual and cooperative behavior of living nature. Well-known bio-inspired algorithms are ant algorithms, pigeon algorithms, and bee algorithms.
       5.        An artificial neural network is a mathematical model of the human brain. Neural networks are used as learning systems.


        1.4. Morphology of a biological neuron


        The intellectual functions of the brain make humans adaptive for handling complex, uncertain, and time-varying environments. The human brain consists of 10¹⁰-10¹¹ biological neurons. A schematic diagram of a biological neuron is shown in figure 1.4.

Fig. 1.4. A schematic diagram of a biological neuron

        A biological neuron receives about 10,000 inputs (dendrites), processes them, and generates a single output. The output (axon) connects to about 10,000 dendrites of other biological neurons. Synapses are the connections between the dendrites of a neuron and the axons of other neurons. The input signals are impulses. Dendrites transmit the input impulses to the soma; this transmission can be either excitatory or inhibitory. The soma aggregates the signals from the dendrites and generates output impulses on the axon.


        1.5. Mathematical model of a biological neuron

      A mathematical representation of a biological neuron is shown in figure 1.5.
      The inputs of the neuron are the signals x₁, …, xₙ from other neurons and the threshold x₀. The synaptic operation assigns a weight w₁, …, wₙ to each input x₁, …, xₙ according to past experience. Thus, the weights w₁, …, wₙ are the memory of the neuron.

Fig. 1.5. A mathematical representation of a biological neuron
        The somatic operation provides aggregation, thresholding, and nonlinear activation. Usually, aggregation is a summation. The threshold adjusts the sensitivity of the neuron to the level of the aggregated signal. The nonlinear activation is a nonlinear static function with saturation.
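
       These three somatic operations can be illustrated with a short sketch. The following Python fragment is a minimal model of the neuron of figure 1.5, assuming a summation for aggregation and a sigmoidal activation; the weight values and function names are illustrative assumptions, not taken from the figure.

```python
import math

def neuron(x, w, w0):
    """Minimal neuron model: synaptic weighting, aggregation (sum),
    thresholding by w0, and a sigmoidal activation with saturation."""
    s = sum(wi * xi for wi, xi in zip(w, x)) - w0   # aggregation and thresholding
    return 1.0 / (1.0 + math.exp(-s))               # nonlinear activation

# Example with two inputs and illustrative weights
print(neuron(x=[0.5, -1.0], w=[0.8, 0.3], w0=0.2))
```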


        1.6. A neural model for a threshold logic


       The first mathematical model of a neuron was proposed by McCulloch and Pitts (1943). This model is a threshold logic element. The element consists of n logic inputs xᵢ, i = 1, 2, …, n, and one logic output y.
       Output y is modeled as follows:

$$y = \begin{cases} \;\;\, 1, & \text{if } \sum_{i=1}^{n} w_i x_i \ge w_0, \\ -1, & \text{if } \sum_{i=1}^{n} w_i x_i < w_0. \end{cases} \qquad (1.1)$$

       The schematic representation of a threshold logic element is shown in figure 1.6.
       A threshold logic element can execute the logic operations AND, OR, and NOT. Figure 1.7 shows these operations.


Fig. 1.6. A mathematical model of a threshold logic element

Fig. 1.7. A threshold logic implementation of OR, AND, and NOT
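
       As an illustration, the following Python sketch implements the threshold element (1.1) and one possible choice of weights and thresholds that realizes OR, AND, and NOT for logic inputs coded as {-1, 1}; the particular weight values are assumptions for illustration and may differ from those in figure 1.7.

```python
def threshold_element(x, w, w0):
    """Threshold logic element (1.1): y = 1 if sum(w_i * x_i) >= w0, else y = -1."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= w0 else -1

# Illustrative weight/threshold choices for logic inputs in {-1, 1}
def OR(x1, x2):  return threshold_element([x1, x2], [1, 1], w0=-1)
def AND(x1, x2): return threshold_element([x1, x2], [1, 1], w0=1)
def NOT(x1):     return threshold_element([x1], [-1], w0=0)

for a in (-1, 1):
    for b in (-1, 1):
        print(a, b, "OR:", OR(a, b), "AND:", AND(a, b), "NOT a:", NOT(a))
```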



        1.7. A neural threshold logic synthesis


      Let us introduce the notion of an augmented vector of synaptic weights wₐ, defined as follows:

$$w_a = [w_0 \; w_1 \; \dots \; w_n]^T. \qquad (1.2)$$

      The augmented vector of neural inputs xₐ is defined as follows (x₀ = 1):

$$x_a = [x_0 \; x_1 \; \dots \; x_n]^T. \qquad (1.3)$$
      Thus, we can write the neuron output using the notion of the augmented vectors as

$$y = \operatorname{signum}(w_a^T x_a). \qquad (1.4)$$

       Expression (1.4) can be described by binary logic. Let us consider the augmented vectors xₐ = [1 x₁ x₂ x₃]ᵀ and wₐ = [-1 3 4 5]ᵀ. The output signal y is

$$y = \operatorname{signum}\big([-1 \;\; 3 \;\; 4 \;\; 5]\,[1 \;\; x_1 \;\; x_2 \;\; x_3]^T\big) = \operatorname{signum}(-1 + 3x_1 + 4x_2 + 5x_3). \qquad (1.5)$$

Let the inputs be logic variables xᵢ ∈ {-1, 1}. Then the truth table is

x₁  -1   1  -1   1  -1   1  -1   1
x₂  -1  -1   1   1  -1  -1   1   1
x₃  -1  -1  -1  -1   1   1   1   1
y   -1  -1  -1   1  -1   1   1   1

From the truth table we obtain

$$y = x_1(\bar{x}_2 x_3 + x_2 \bar{x}_3) + x_2 x_3. \qquad (1.6)$$
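
       The truth table above can be verified directly from expression (1.5). A short Python sketch, assuming the {-1, 1} coding of the logic inputs, enumerates all input combinations and reproduces the table:

```python
from itertools import product

def signum(s):
    # sign function used in (1.4); the aggregated signal never equals zero here
    return 1 if s >= 0 else -1

w_a = [-1, 3, 4, 5]          # augmented weight vector from (1.5)

for x1, x2, x3 in product((-1, 1), repeat=3):
    x_a = [1, x1, x2, x3]    # augmented input vector, x0 = 1
    y = signum(sum(w * x for w, x in zip(w_a, x_a)))
    print(x1, x2, x3, "->", y)
```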
      Let us consider the following logic function

$$y = x_1 \oplus (x_2 + x_3). \qquad (1.7)$$


       The output is y = -1 for both x₁x₂x₃ and x̄₁x̄₂x̄₃. Therefore
$$w_1 + w_2 + w_3 < w_0 \quad \text{and} \quad -w_1 - w_2 - w_3 < w_0,$$
which together require w₀ > 0. Combined with the inequalities that follow from the input combinations for which y = 1, this system of inequalities is conflicting: no weights and threshold value can satisfy all of them. Function (1.7) cannot be realized by a single threshold element.



        A switching function that can be realized by a single threshold element is called a threshold function. A threshold function is also called a linearly separable function (fig. 1.8). This means that the hyperplane wₐᵀxₐ = 0 divides the values of a threshold function in the following way: all the true points lie on one side of the hyperplane, and all the false points lie on the other side.

Fig. 1.8. Functions: a) a linearly separable function; b) a linearly non-separable function (XOR)
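
       The linear non-separability illustrated in figure 1.8b can also be checked exhaustively. The following Python sketch searches over a small range of integer weights and thresholds (an illustrative assumption) for a single threshold element reproducing the two-input XOR and finds none, whereas the same search succeeds for AND.

```python
from itertools import product

def threshold(x, w, w0):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= w0 else -1

def realizable(truth):
    """Search integer weights/thresholds in [-3, 3] for a single threshold
    element reproducing the given truth table over inputs in {-1, 1}^2."""
    for w1, w2, w0 in product(range(-3, 4), repeat=3):
        if all(threshold([x1, x2], [w1, w2], w0) == y for (x1, x2), y in truth.items()):
            return (w1, w2, w0)
    return None

AND = {(-1, -1): -1, (-1, 1): -1, (1, -1): -1, (1, 1): 1}
XOR = {(-1, -1): -1, (-1, 1): 1, (1, -1): 1, (1, 1): -1}

print("AND:", realizable(AND))   # a weight/threshold triple is found
print("XOR:", realizable(XOR))   # None: XOR is linearly non-separable
```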

       Realization of a linearly non-separable function requires a network of threshold elements (fig. 1.8). Decomposition of the non-threshold function is a way to synthesize such a network. Any given switching function may be realized by a two-layered threshold network, as shown in figure 1.9. The intermediate variables z₁, z₂, …, zₘ may be computed by

$$z_j = \operatorname{signum}\left(\sum_{i=0}^{n} w_{ji} x_i\right), \quad j = 1, 2, \dots, m. \qquad (1.8)$$

      The output of the network is computed by


$$y = \operatorname{signum}\left(\sum_{j=0}^{m} w_j^{o} z_j\right) = \operatorname{signum}\left(w_0^{o} + \sum_{j=1}^{m} w_j^{o} \operatorname{signum}\left(\sum_{i=0}^{n} w_{ji} x_i\right)\right), \qquad (1.9)$$

where $w_j^{o}$ are the synaptic weights of the output element and $z_0 = 1$.

Fig. 1.9. A neural threshold network
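
       As a sketch of expressions (1.8)-(1.9), the following Python fragment realizes the linearly non-separable XOR function of figure 1.8 by a two-layered threshold network. The hidden-layer weights correspond to the decomposition y = x₁x̄₂ + x̄₁x₂ and are chosen here for illustration; they are not taken from figure 1.9.

```python
def signum(s):
    return 1 if s >= 0 else -1

def threshold(x, w, w0):
    """Single threshold element: signum(sum(w_i * x_i) - w0)."""
    return signum(sum(wi * xi for wi, xi in zip(w, x)) - w0)

def xor_network(x1, x2):
    """Two-layered threshold network (1.8)-(1.9) realizing XOR for inputs in {-1, 1}."""
    # Hidden layer: z1 is true for x1 AND (NOT x2), z2 is true for (NOT x1) AND x2
    z1 = threshold([x1, x2], [1, -1], w0=1)
    z2 = threshold([x1, x2], [-1, 1], w0=1)
    # Output layer: OR of the two intermediate variables
    return threshold([z1, z2], [1, 1], w0=-1)

for a in (-1, 1):
    for b in (-1, 1):
        print(a, b, "->", xor_network(a, b))
```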
