Commit MetaInfo

Revision: 540326872dd8476397043be71f24084cf8d16a3c (tree)
Date: 2022-07-12 23:50:29
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

asis

Change Summary

Diff

diff -r d27b4b065457 -r 540326872dd8 CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst
--- a/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Tue Jul 12 13:20:19 2022 +0200
+++ b/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Tue Jul 12 16:50:29 2022 +0200
@@ -27,25 +27,20 @@
2727
2828 As there is much theory available, and even more practical expertise, but only a limited set of “common words”, let us describe
2929 some basic terms. As always, we use Wikipedia as common ground, and add links for a deep-dive.
30-
31-.. include:: CCC-sidebar-concurrency.irst
32-
30+|BR|
31+Again, we use ‘task’ as the most generic term for work-to-be-executed; that can be (in) a process, (on) a thread, (by) a
32+computer, etc.
3333
3434
35-TODO
36-************************************************************
37-
38-.. todo:: All below is draft and needs work!!!!
39-
40-
35+.. include:: CCC-sidebar-concurrency.irst
4136
4237 Concurrency
4338 -----------
4439
45-Concurrency_ is the ability to “compute” multiple things at the same time.
40+Concurrency_ is the ability to “compute” multiple *tasks* at the same time.
4641 |BR|
4742 Designing concurrent software isn’t that complicated, but it demands another mindset than when we write software that does
48-one thing afer the other.
43+one task after the other.
4944
5045 A typical example is a loop: suppose we have a sequence of numbers and we would like to compute the square of each one. Most
5146 developers will loop over those numbers, get one number, calculate the square, store it in another list, and continue.
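
A minimal sketch of that loop example, in plain Python (used here only for illustration, not taken from the project): the sequential loop computes one square after the other, while the concurrent variant hands the independent tasks to a thread pool and lets the executor choose the order::

    from concurrent.futures import ThreadPoolExecutor

    numbers = [1, 2, 3, 4, 5]

    # Sequential: one task after the other.
    squares_seq = [n * n for n in numbers]

    # Concurrent: each square is an independent task; the pool is free to
    # execute them in any order (map() still returns results in order).
    # This shows the structure only -- any parallel speed-up depends on
    # the runtime and on the kind of work.
    with ThreadPoolExecutor() as pool:
        squares_conc = list(pool.map(lambda n: n * n, numbers))

    assert squares_seq == squares_conc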
@@ -61,7 +56,6 @@
6156 sequential execution is allowed too.
6257
6358
64-
6559 Parallelism
6660 ------------
6761
@@ -69,7 +63,7 @@
6963 task (of the same program) on “as many cores as possible”. When we assume a thousand cores, we need a thousand
7064 independent tasks (at least) to gain maximal speed up. A thousand at any moment!
7165 |BR|
72-It’s not only about doing a thousand things at the same time (that is not to complicated, for a computer), but also —
66+It’s not only about doing a thousand tasks at the same time (that is not too complicated for a computer), but also —
7367 probably, mostly — about finishing a thousand times faster…
7468
7569 With many cores, multiple program-steps can be executed at the same time: from changing the same variable, access the
@@ -78,24 +72,51 @@
7872
7973
8074 Distributed
81------------
75+~~~~~~~~~~~
8276
8377 A special form of parallelism is Distributed-Computing_: computing on many computers. Many experts consider this
8478 an independent field of expertise; still --as Multi-Core_ is basically “many computers on a chip”-- there is an
8579 analogy [#DistributedDiff]_, and we should use the know-how that is available to design our “best ever language”.
8680
87-Messages & shared-data
88-----------------------
89-
90-Communication between two (concurrent) tasks (or processes, CPUs, computers) needs the passing of data (in one or two
91-direction). Roughly, there are two ways to do so:
92-
93-Shared-Data
94- Memory (variables) that can be written and/or read by both. As the acces is typical not acces, a bit of
9581
9682
97-Controll
98-========
83+Communication
84+-------------
85+
86+When tasks run in various environments they have to communicate: to pass data and to control progress. Unlike in a
87+sequential program -- where control is trivial, as is sharing data -- this needs a bit of extra effort.
88+|BR|
89+There are two main approaches: shared-data or message-passing (a sketch of both follows below).
90+
91+Shared Memory
92+~~~~~~~~~~~~~
93+In this model all tasks (threads or processes) have some shared/common memory; typically “variables”. As the access is
94+asynchronous, there is a risk that the data is updated “at the same time” by two or more tasks. This can lead to invalid data;
95+and so Critical-Sections_ are needed.
96+
97+This is a very basic model, which assumes that there is physical memory that can be shared. In distributed systems this
98+is uncommon; but for threads it’s straightforward. A disadvantage of this model is that it is hazardous: when even a
99+single access to such a shared variable is not protected by a Critical-Section_, the whole system can break [#OOCS]_.
100+
101+
102+Messages
103+~~~~~~~~
104+A more modern approach is Message-Passing_. One task sends a message (sometimes called an “event”) to another task; as there
105+is a distinct sender and receiver -- and apparently no common/shared memory -- no Critical-Sections [#MPCS]_ are
106+needed. At least not explicitly. Messages can be used by all kinds of tasks; even in a distributed system -- then the
107+message (and its data) is serialised, transmitted over a network and deserialised, which can introduce some overhead and
108+delay.
109+|BR|
110+Many people use this networking mental model when they think about Message-Passing_, and *wrongly* assume there is
111+always overhead. When (carefully) implemented, that overhead is not needed; message-passing can be as efficient as shared-memory
112+(assuming there is shared-memory that can be used).
113+
114+
115+************************************************************
116+
117+.. todo:: All below is draft and needs work!!!!
118+
119+
99120
100121 Models
101122 ======
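
A sketch of the two communication approaches described in the hunk above, in plain Python for illustration only: a shared counter where every update must pass through a critical section, and the same exchange done by message-passing over a queue, where no explicit critical section appears in the user code::

    import threading
    import queue

    # Shared memory: 'counter' is a shared variable; every update goes
    # through the critical section guarded by the lock.
    counter = 0
    lock = threading.Lock()

    def add_shared(n: int) -> None:
        global counter
        with lock:                      # the critical section
            counter += n

    # Message passing: senders only put messages in a mailbox; a single
    # receiver owns the total, so no explicit critical section is needed.
    mailbox: queue.Queue = queue.Queue()

    def add_by_message(n: int) -> None:
        mailbox.put(n)                  # send a message (an "event")

    def receive_total(expected: int) -> int:
        total = 0
        for _ in range(expected):
            total += mailbox.get()      # receive one message at a time
        return total

    threads = [threading.Thread(target=add_shared, args=(i,)) for i in range(10)]
    threads += [threading.Thread(target=add_by_message, args=(i,)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert counter == receive_total(10) == sum(range(10))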
@@ -105,6 +126,9 @@
105126
106127 Actors
107128
129+Actor-Model_
130+Actor-Model-Theory_
131+
108132 A very introduce
109133
110134 --------
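
The Models part is still marked as draft, but as a rough illustration of the Actor-Model_ it refers to, here is a toy actor in plain Python (the names are made up for this sketch): all state is private to the actor, and the only way to interact with it is to send messages to its mailbox::

    import threading
    import queue

    class CounterActor:
        """Toy actor: private state, a mailbox, one message handled at a time."""

        def __init__(self) -> None:
            self._mailbox: queue.Queue = queue.Queue()
            self._value = 0
            self._thread = threading.Thread(target=self._run)
            self._thread.start()

        def send(self, msg) -> None:
            """Sending a message is the only way to interact with the actor."""
            self._mailbox.put(msg)

        def result(self) -> int:
            """Wait until the actor has stopped, then read its final state."""
            self._thread.join()
            return self._value

        def _run(self) -> None:
            while True:
                msg = self._mailbox.get()
                if msg is None:           # 'poison pill': stop the actor
                    return
                self._value += msg        # only the actor itself touches _value

    actor = CounterActor()
    for i in range(10):
        actor.send(i)
    actor.send(None)
    assert actor.result() == sum(range(10))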
@@ -127,15 +151,27 @@
127151 |BR|
128152 But that condition does apply to Multi-Core_ too, although the (timing) numbers do differ.
129153
154+.. [#OOCS]
155+ The brittleness of Critical-Sections_ can be reduced by embedding the shared variable in an OO abstraction. By
156+ using *getters* and *setters* that control the access, the biggest risk is (mostly) gone. That does not, however,
157+ prevent deadlocks_ or livelocks_. Also see the note below.
158+
159+.. [#MPCS]
160+ This is not completely correct; Message-Passing_ can be implemented on top of shared-memory. Then, the implementation
161+ of that (usually OO) abstraction contains the Critical-Sections_, much as described in the footnote above (and sketched below).
162+
163+
164+
130165 .. _pthreads: https://en.wikipedia.org/wiki/Pthreads
131166 .. _Threads: https://en.wikipedia.org/wiki/Thread_(computing)
132167 .. _Multi-Core: https://en.wikipedia.org/wiki/Multi-core_processor
133168
134169 .. _deadlocks: https://en.wikipedia.org/wiki/Deadlock
135170 .. _livelocks: https://en.wikipedia.org/wiki/Deadlock#Livelock
136-.. _Critical-Sections: https://en.wikipedia.org/wiki/Critical_section
171+.. _Critical-Section: https://en.wikipedia.org/wiki/Critical_section
172+.. _Critical-Sections: Critical-Section_
137173 .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing
138-
174+.. _Message-Passing: https://en.wikipedia.org/wiki/Message_passing
139175 .. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model
140176 .. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
141177
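
As footnote [#MPCS] notes, message-passing can itself be built on top of shared memory; a minimal sketch in plain Python (illustrative names only) where the critical section is hidden inside the abstraction, so callers never take a lock themselves::

    import threading
    from collections import deque

    class Mailbox:
        """Message passing built on shared memory: the critical section
        lives inside this abstraction, so senders and receivers never
        lock anything themselves."""

        def __init__(self) -> None:
            self._items = deque()                    # the shared data
            self._lock = threading.Lock()
            self._not_empty = threading.Condition(self._lock)

        def send(self, msg) -> None:
            with self._not_empty:                    # hidden critical section
                self._items.append(msg)
                self._not_empty.notify()

        def receive(self):
            with self._not_empty:                    # hidden critical section
                while not self._items:
                    self._not_empty.wait()
                return self._items.popleft()

    box = Mailbox()
    threading.Thread(target=lambda: box.send("hello")).start()
    assert box.receive() == "hello"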
diff -r d27b4b065457 -r 540326872dd8 Makefile
--- a/Makefile Tue Jul 12 13:20:19 2022 +0200
+++ b/Makefile Tue Jul 12 16:50:29 2022 +0200
@@ -52,7 +52,7 @@
5252
5353 wc:
5454 @echo "lines words file"
55- @wc -lw `find CCastle/ -iname \*rst`|sort -r
55+ @wc -lw `find CCastle/ -iname \*rst`|sort -r | grep -v /index.rst | grep -v /zz.todo.rst
5656
5757 sidebar:
5858 @grep "include::" `find CCastle/ -type f -name \*.rst` /dev/null | grep sidebar| sort| sed 's/:../:\t\t ../'