Revision | 9dd4e501c7d400dffe43990e8c8bacb719591c80 (tree) |
---|---|
Date | 2022-09-25 20:05:34 |
Author | Albert Mietus < albert AT mietus DOT nl > |
Committer | Albert Mietus < albert AT mietus DOT nl > |
AsIs
@@ -1,4 +1,4 @@ | ||
1 | -.. include:: /std/localtoc.irst | |
1 | +.. include:: /std/localtoc2.irst | |
2 | 2 | |
3 | 3 | .. _ConcurrentComputingConcepts: |
4 | 4 |
@@ -8,7 +8,7 @@ | ||
8 | 8 | |
9 | 9 | .. post:: |
10 | 10 | :category: Castle DesignStudy |
11 | - :tags: Castle, Concurrency, DRAFT§ | |
11 | + :tags: Castle, Concurrency, DRAFT | |
12 | 12 | |
13 | 13 | Sooner than we realize, even embedded systems will have piles & heaps of cores, as I described in
14 | 14 | “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize
@@ -91,7 +91,7 @@ | ||
91 | 91 | Communication takes time, especially *wall time* [#wall-time]_ (or clock time), and may slow down computing. Therefore |
92 | 92 | communication has to be efficient. This is an arduous problem and becomes harder when we have more communication, more |
93 | 93 | concurrency, more parallelism, and/or those tasks are short living. Or better: it depends on the ratio of |
94 | -time-between-communications and the time-between-two-communications. | |
94 | +the time-needed-for-one-communication and the time-between-two-communications. | |
95 | 95 | |
96 | 96 | |
97 | 97 | Shared Memory |
@@ -100,14 +100,17 @@ | ||
100 | 100 | In this model all tasks (usually threads or processes) have some shared/common memory; typically “variables”. As the access |
101 | 101 | is asynchronous, the risk exists that the data is updated “at the same time” by two or more tasks. This can lead to invalid |
102 | 102 | data and so Critical-Sections_ are needed. |
103 | - | |
103 | +|BR| | |
104 | 104 | This is a very basic model which assumes that there is physical memory that can be shared. In distributed systems this |
105 | -is uncommon, but for threads it’s straightforward. A disadvantage of this model is that is hazardous: Even when a | |
106 | -single modifier of a shared variable is not protected by a Critical-Section_, the whole system can break [#OOCS]_. | |
105 | +is uncommon, but for threads it’s straightforward. | |
107 | 106 | |
108 | -The advantage of shared memory is the fast *communication-time*. The wall-time and CPU-time are roughly the same: the | |
107 | +An advantage of shared memory is the fast *communication-time*. The wall-time and CPU-time are roughly the same: the | |
109 | 108 | time to write & read the variable added to the (overhead) time for the critical section -- which is typically the |
110 | 109 | bigger part. |
110 | +|BR| | |
111 | +The big disadvantage of this model is that it is hazardous: the programmer needs to insert Critical-Sections_ into the code | |
112 | +at every place that *variable* is used. Even a single access to a shared variable that is not protected by a | |
113 | +Critical-Section_ can (and will) break the whole system [#OOCS]_. | |
111 | 114 | |
112 | 115 | |
113 | 116 | Messages |
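The critical-section hazard described in the hunk above can be illustrated with a short sketch. This is hypothetical Python, not Castle: one lock guards every access to a shared counter, so concurrent increments are never lost; drop the `with lock:` and the result becomes non-deterministic.

```python
import threading

counter = 0                      # the shared variable
lock = threading.Lock()          # guards every access to `counter`

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:               # critical section: without it, updates can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- deterministic only because every access is protected
```

Note that the protection is purely a convention here: nothing stops a fifth thread from touching `counter` without taking the lock, which is exactly the brittleness the text warns about.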
@@ -224,11 +227,11 @@ | ||
224 | 227 | Messages can be sent to one receiver, to many, or even to everybody. Usually this is modeled as a characteristic of the |
225 | 228 | channel. And at the same time, that channel can be used to send messages one-way, or two-way. |
226 | 229 | |
227 | -It depends on the context on the exact intent. By example in (TCP/IP) networking, `Broadcasting | |
228 | -<https://en.wikipedia.org/wiki/Broadcasting_(networking)>`_ (and al variants that are not point-to-point) focus on | |
229 | -reducing the amount of data on the network itself. In distributed computing `Broadcasting | |
230 | -<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`_ is a parallel Design pattern. Whereas the `Broadcast flag | |
231 | -<https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV steaming is a complete other idea: is it allowed to save | |
230 | +The exact intent depends on the context. For example, in (TCP/IP) `networking, ‘Broadcasting’ | |
231 | +<https://en.wikipedia.org/wiki/Broadcasting_(networking)>`__ (all variants that are not point-to-point) focuses on reducing the | |
232 | +amount of data on the network itself. In `distributed computing ‘Broadcasting’ | |
233 | +<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`__ is a parallel design pattern. Whereas the `‘Broadcast’ | |
234 | +flag <https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV streaming is a completely different idea: is it allowed to save | |
232 | 235 | (record) a TV broadcast... |
233 | 236 | |
234 | 237 | We use those terms for the functional aim. We consider the above-mentioned RPC connection as **Unidirectional** -- even |
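The point-to-point versus broadcast distinction from this hunk can be sketched as follows. This is a hypothetical Python toy (`Channel` is not part of any real API): one channel object delivers either to a single subscriber or to all of them.

```python
import queue

class Channel:
    """Toy one-way channel: deliver point-to-point or broadcast to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def send(self, msg, to=None):
        # `to=None` broadcasts to every subscriber; otherwise it is point-to-point
        for q in (self.subscribers if to is None else [to]):
            q.put(msg)

ch = Channel()
a, b = ch.subscribe(), ch.subscribe()
ch.send("hello")            # broadcast: both receivers get it
ch.send("only-a", to=a)     # point-to-point: only `a` gets it
got_a = [a.get(), a.get()]
got_b = [b.get()]
print(got_a, got_b)  # ['hello', 'only-a'] ['hello']
```

As in the text, whether a channel is one-to-one or one-to-many is a property of the channel (here, of the `send` call), not of the message itself.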
@@ -298,27 +301,52 @@ | ||
298 | 301 | Then, a *faster* conversation with a bit of noise is commonly preferred. |
299 | 302 | |
300 | 303 | |
301 | ------------------------- | |
302 | 304 | |
303 | -.. todo:: All below is draft and needs work!!!! | |
304 | 305 | |
305 | 306 | |
306 | 307 | Process calculus |
307 | 308 | ================ |
308 | 309 | |
309 | -Probably the oldest model to described concurrency is the | |
310 | -(all tokens move at the same timeslot) -- which is a hard to implement (efficiently) on Multi-Core_. | |
311 | - | |
312 | -Actors | |
310 | +.. todo:: All below is draft and needs work!!!! | |
313 | 311 | |
314 | -Actor-Model_ | |
315 | -Actor-Model-Theory_ | |
312 | +After studying many concurrent concepts, we need to address one more before we can *design* the Castle-language. That is | |
313 | +“*How do we determine what is ‘best’ (ever)*”? Can we *calculate* the performance of every aspect? The answer is no; | |
314 | +but there are formal systems that can help: Process-Calculus_ (or -Algebra). | |
315 | +|BR| | |
316 | +Unfortunately, there are many of them. And I’d like to avoid the recursion-trap: studying them all, to find a meta-calculus | |
317 | +to determine the best, and so on. | |
316 | 318 | |
317 | -A very introduce | |
319 | +So let’s give a quick overview. And recall, the term ‘process’ is pretty general: it denotes the ‘behaviour of a | |
320 | +system’, not the more limited meaning most software-developers use. | |
318 | 321 | |
319 | --------- | |
322 | +Traditional ones | |
323 | +---------------- | |
320 | 324 | |
321 | -END | |
325 | +Many Process-Calculus_\es were invented around 1980. As often, those traditional ones focus on the issues that were current | |
326 | +back then. And although they are still useful, they might be unaware of modern aspects of computing -- like huge code | |
327 | +bases, and thousands of cores. | |
328 | + | |
329 | +Communicating sequential processes | |
330 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
331 | + | |
332 | +CSP_ is probably the oldest and best-known *formal* language to describe (patterns in) concurrent systems. It started in | |
333 | +1978 as a kind of programming language, and has evolved since then. Occam_ --the language to program the once-famous | |
334 | +Transputer_-- is based on CSP_. | |
335 | + | |
336 | +Also ‘Go_’ (the language) is influenced by CSP_. A sign that CSP_ isn’t too old. | |
337 | + | |
338 | + | |
339 | +Calculus of Communicating Systems | |
340 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
341 | + | |
342 | +CCS_ is also quite old (1980) and quite useful to calculate deadlocks_ and livelocks_. | |
343 | + | |
344 | +Algebra of Communicating Processes | |
345 | +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
346 | + | |
347 | +ACP_ also dates back to 1982 and is a real algebra -- it probably coined the general term Process-Calculus_. | |
348 | + | |
322 | 350 | |
323 | 351 | ---------- |
324 | 352 |
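CSP's core idea -- sequential processes that interact only through synchronous channels -- can be approximated in Python. This is a sketch, not CSP proper: a `queue.Queue` with capacity 1 stands in for a rendezvous channel between two otherwise-independent processes, much like channels in Go.

```python
import threading
import queue

def producer(ch: queue.Queue) -> None:
    for i in range(3):
        ch.put(i)            # blocks until the consumer is ready: rendezvous-like
    ch.put(None)             # end-of-stream marker

def consumer(ch: queue.Queue, out: list) -> None:
    while (msg := ch.get()) is not None:
        out.append(msg * 2)  # the only coupling to the producer is the channel

ch = queue.Queue(maxsize=1)  # capacity 1 approximates CSP's synchronous channel
out: list = []
t1 = threading.Thread(target=producer, args=(ch,))
t2 = threading.Thread(target=consumer, args=(ch, out))
t1.start(); t2.start()
t1.join(); t2.join()

print(out)  # [0, 2, 4]
```

Neither process touches the other's state; all interaction is message passing over the channel, which is what makes CSP-style systems amenable to formal reasoning.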
@@ -343,7 +371,9 @@ | ||
343 | 371 | .. [#OOCS] |
344 | 372 | The brittleness of Critical-Sections_ can be reduced by embedding (the) (shared-) variable in an OO abstraction. By |
345 | 373 | using *getters* and *setters*, that control the access, the biggest risk is (mostly) gone. That does not, however, |
346 | - prevent deadlocks_ nor livelocks_. Also see the note below. | |
374 | + prevent deadlocks_ nor livelocks_. | |
375 | + |BR| | |
376 | + And still, all developers have to be disciplined to use that abstraction ... always. | |
347 | 377 | |
348 | 378 | .. [#MPCS] |
349 | 379 | This is not completely correct; Message-Passing_ can be implemented on top of shared-memory. Then, the implementation |
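The OO abstraction mentioned in the footnote above can be sketched in Python (hypothetical class name; the point is that the lock lives inside the accessors, so callers cannot forget it -- though, as the footnote says, they must still be disciplined enough to use the class at all).

```python
import threading

class SharedCounter:
    """Wraps a shared variable; every access goes through locked accessors."""
    def __init__(self) -> None:
        self._value = 0
        self._lock = threading.Lock()

    @property
    def value(self) -> int:
        with self._lock:          # reads are protected too
            return self._value

    def increment(self) -> None:
        with self._lock:          # the caller cannot forget the critical section
            self._value += 1

c = SharedCounter()
threads = [
    threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(c.value)  # 4000
```

This removes the biggest risk (a forgotten critical section around one access), but as the footnote notes it does not prevent deadlocks or livelocks.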
@@ -378,10 +408,14 @@ | ||
378 | 408 | .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing |
379 | 409 | .. _Message-Passing: https://en.wikipedia.org/wiki/Message_passing |
380 | 410 | .. _Events: https://en.wikipedia.org/wiki/Event_(computing) |
381 | -.. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model | |
382 | -.. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory | |
383 | 411 | .. _RPC: https://en.wikipedia.org/wiki/Remote_procedure_call |
384 | 412 | .. _Broadcasting: https://en.wikipedia.org/wiki/Broadcasting_(networking) |
385 | 413 | .. _Reliability: https://en.wikipedia.org/wiki/Reliability_(computer_networking) |
386 | 414 | .. _Process-Calculus: https://en.wikipedia.org/wiki/Process_calculus |
387 | 415 | .. _Futures: https://en.wikipedia.org/wiki/Futures_and_promises |
416 | +.. _CSP: https://en.wikipedia.org/wiki/Communicating_sequential_processes | |
417 | +.. _Occam: https://en.wikipedia.org/wiki/Occam_(programming_language) | |
418 | +.. _Transputer: https://en.wikipedia.org/wiki/Transputer | |
419 | +.. _Go: https://en.wikipedia.org/wiki/Go_(programming_language) | |
420 | +.. _CCS: https://en.wikipedia.org/wiki/Calculus_of_communicating_systems | |
421 | +.. _ACP: https://en.wikipedia.org/wiki/Algebra_of_communicating_processes |
@@ -1,5 +1,6 @@ | ||
1 | 1 | .. _MPA-examples: |
2 | 2 | |
3 | +======================================== | |
3 | 4 | Everyday Message Passing examples (ToDo) |
4 | 5 | ======================================== |
5 | 6 |
@@ -16,7 +16,10 @@ | ||
16 | 16 | |
17 | 17 | As an abstraction, those active actors are similar to the :ref:`”Many Core” concept<CC>` we use for CCastle. Hence, we study this model a bit more. |
18 | 18 | |
19 | +---------------------- | |
19 | 20 | |
21 | +Actor-Model_ | |
22 | +Actor-Model-Theory_ | |
20 | 23 | |
21 | 24 | |
22 | 25 |
@@ -31,3 +34,6 @@ | ||
31 | 34 | .. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model |
32 | 35 | .. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory |
33 | 36 | .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing |
@@ -0,0 +1,11 @@ | ||
1 | +.. -*- rst -*- | |
2 | + USE AS: | |
3 | + .. include:: /std/localtoc2.irst | |
4 | + | |
5 | +.. sidebar:: On this page | |
6 | + :class: localtoc | |
7 | + | |
8 | + .. contents:: | |
9 | + :depth: 2 | |
10 | + :local: | |
11 | + :backlinks: none |