
Commit MetaInfo

Revision: 9dd4e501c7d400dffe43990e8c8bacb719591c80 (tree)
Time: 2022-09-25 20:05:34
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

AsIs

Change Summary

Diff

diff -r ef900cf664bc -r 9dd4e501c7d4 CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst
--- a/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Mon Sep 19 22:02:08 2022 +0200
+++ b/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sun Sep 25 13:05:34 2022 +0200
@@ -1,4 +1,4 @@
-.. include:: /std/localtoc.irst
+.. include:: /std/localtoc2.irst
 
 .. _ConcurrentComputingConcepts:
 
@@ -8,7 +8,7 @@
 
 .. post::
    :category: Castle DesignStudy
-   :tags: Castle, Concurrency, DRAFT§
+   :tags: Castle, Concurrency, DRAFT
 
 Sooner as we realize, even embedded systems will have piles & heaps of cores, as I described in
 “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize
@@ -91,7 +91,7 @@
 Communication takes time, especially *wall time* [#wall-time]_ (or clock time), and may slow down computing. Therefore
 communication has to be efficient. This is an arduous problem and becomes harder when we have more communication, more
 concurrency, more parallelism, and/or those tasks are short living. Or better: it depends on the ratio of
-time-between-communications and the time-between-two-communications.
+time-between-communications and the time-between-two-communications.
 
 
 Shared Memory
@@ -100,14 +100,17 @@
 In this model all tasks (usually threads or processes) have some shared/common memory; typically “variables”. As the access
 is asynchronous, the risk exists the data is updated “at the same time” by two or more tasks. This can lead to invalid
 data and so Critical-Sections_ are needed.
-
+|BR|
 This is a very basic model which assumes that there is physical memory that can be shared. In distributed systems this
-is uncommon, but for threads it’s straightforward. A disadvantage of this model is that is hazardous: Even when a
-single modifier of a shared variable is not protected by a Critical-Section_, the whole system can break [#OOCS]_.
+is uncommon, but for threads it’s straightforward.
 
-The advantage of shared memory is the fast *communication-time*. The wall-time and CPU-time are roughly the same: the
+An advantage of shared memory is the fast *communication-time*. The wall-time and CPU-time are roughly the same: the
 time to write & read the variable added to the (overhead) time for the critical section -- which is typically the
 bigger part.
+|BR|
+The big disadvantage of this model is that it is hazardous: the programmer needs to insert Critical-Sections_ into the code
+at every place that *variable* is used. Even a single access to a shared variable that is not protected by a
+Critical-Section_ can (will) break the whole system [#OOCS]_.
 
 
 Messages
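The hunk above argues that every access to a shared variable must sit inside a critical section. As an editorial aside (not part of the commit), a minimal Python sketch of that idea, using a lock to guard a shared counter; all names are illustrative:

```python
import threading

# A shared "variable" plus the lock that guards its critical section.
# Every task must take the lock for *every* access; a single unguarded
# read-modify-write is enough to corrupt the data.
counter = 0
counter_lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with counter_lock:   # enter the critical section
            counter += 1     # the read-modify-write is now atomic

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 -- correct only because every access was guarded
```

Note how the communication itself is just a memory write, so the wall-time cost is dominated by the lock overhead, as the text says.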
@@ -224,11 +227,11 @@
 Message can be sent to one receiver, to many, or even to everybody. Usually this is modeled as an characteristic of the
 channel. And at the same time, that channel can be used to send message in oneway, or in two-ways.
 
-It depends on the context on the exact intent. By example in (TCP/IP) networking, `Broadcasting
-<https://en.wikipedia.org/wiki/Broadcasting_(networking)>`_ (and al variants that are not point-to-point) focus on
-reducing the amount of data on the network itself. In distributed computing `Broadcasting
-<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`_ is a parallel Design pattern. Whereas the `Broadcast flag
-<https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV steaming is a complete other idea: is it allowed to save
+The exact intent depends on the context. For example, in (TCP/IP) `networking, ‘Broadcasting’
+<https://en.wikipedia.org/wiki/Broadcasting_(networking)>`__ (all variants that are not point-to-point) focuses on
+reducing the amount of data on the network itself. In `distributed computing ‘Broadcasting’
+<https://en.wikipedia.org/wiki/Broadcast_(parallel_pattern)>`__ is a parallel design pattern. Whereas the `‘Broadcast’
+flag <https://en.wikipedia.org/wiki/Broadcast_flag>`_ in TV streaming is a completely different idea: is it allowed to save
 (record) a TV broadcast...
 
 We use those teams on the functional aim. We consider the above mentioned RCP connection as **Unidirectional** -- even
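The hunk above models one-to-many delivery as a characteristic of the channel. As an editorial aside (not from the commit), a small Python sketch of a one-way broadcast channel that copies every message to all subscribed receivers; the class and method names are made up for this illustration:

```python
import queue

class BroadcastChannel:
    """One-way channel: every message is delivered to all receivers."""

    def __init__(self) -> None:
        self._subscribers: list[queue.Queue] = []

    def subscribe(self) -> queue.Queue:
        # Each receiver gets its own mailbox.
        q: queue.Queue = queue.Queue()
        self._subscribers.append(q)
        return q

    def send(self, message) -> None:
        # The channel, not the sender, knows it is one-to-many.
        for q in self._subscribers:
            q.put(message)

chan = BroadcastChannel()
a, b = chan.subscribe(), chan.subscribe()
chan.send("hello")
print(a.get(), b.get())  # both receivers see the same message
```

A point-to-point channel would differ only in `send` picking a single mailbox, which is exactly why this is naturally modeled as a channel property.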
@@ -298,27 +301,52 @@
 Then, a *faster* conversation with a bit of noise is commonly preferred.
 
 
-------------------------
 
-.. todo:: All below is draft and needs work!!!!
 
 
 Process calculus
 ================
 
-Probably the oldest model to described concurrency is the
-(all tokens move at the same timeslot) -- which is a hard to implement (efficiently) on Multi-Core_.
-
-Actors
+.. todo:: All below is draft and needs work!!!!
 
-Actor-Model_
-Actor-Model-Theory_
+After studying many concurrency concepts, we need to address one more before we can *design* the Castle-language. That is
+“*How do we determine what is ‘best’ (ever)*”? Can we *calculate* the performance of every aspect? The answer is no;
+but there are formal systems that can help: Process-Calculus_ (or -Algebra).
+|BR|
+Unfortunately, there are many of them. And I would like to avoid the recursion-trap: studying them all to find a meta-calculus
+to determine the best, etc.
 
-A very introduce
+So let’s give a quick overview. And recall, the term ‘process’ is pretty general: it denotes the ‘behaviour of a
+system’, not the more limited practice most software-developers use.
 
---------
+Traditional ones
+----------------
 
-END
+Many Process-Calculus_\es were invented around 1980. As often, those traditional ones focus on the issues that were current
+back then. And although they are still useful, they might be unaware of modern aspects of computing -- like huge code
+bases, and over a thousand cores.
+
+Communicating sequential processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+CSP_ is probably the oldest and best-known *formal* language to describe (patterns in) concurrent systems. It started in
+1978 as a kind of programming language, and has evolved since then. Occam_ --the language to program the once-famous
+Transputer_-- is based on CSP_.
+
+Also ‘Go_’ (the language) is influenced by CSP_. A sign that CSP_ isn’t too old.
+
+
+Calculus of Communicating Systems
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+CCS_ is also quite old (1980) and quite useful to calculate deadlocks_ and livelocks_.
+
+Algebra of Communicating Processes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ACP_ also dates back to 1982 and is a real algebra -- it probably coined the general term Process-Calculus_.
+
+aaa
 
 ----------
@@ -343,7 +371,9 @@
 .. [#OOCS]
    The brittleness of Critical-Sections_ can be reduced by embedding (the) (shared-) variable in an OO abstraction. By
    using *getters* and *setters* that control the access, the biggest risk is (mostly) gone. That does not, however,
-   prevent deadlocks_ nor livelocks_. Also see the note below.
+   prevent deadlocks_ nor livelocks_.
+   |BR|
+   And still, all developers have to be disciplined to use that abstraction ... always.
 
 .. [#MPCS]
    This is not completely correct; Message-Passing_ can be implemented on top of shared-memory. Then, the implementation
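The footnote above suggests wrapping the shared variable in an OO abstraction with getters and setters that control the access. As an editorial aside (not from the commit), a minimal Python sketch of that idea; class and method names are invented for the illustration:

```python
import threading

class SharedValue:
    """Embeds a shared variable so every access goes through one
    critical section -- callers can no longer forget the lock."""

    def __init__(self, initial: int = 0) -> None:
        self._value = initial
        self._lock = threading.Lock()

    def get(self) -> int:
        with self._lock:          # guarded getter
            return self._value

    def add(self, delta: int) -> None:
        with self._lock:          # guarded setter/updater
            self._value += delta

shared = SharedValue()
workers = [
    threading.Thread(target=lambda: [shared.add(1) for _ in range(1000)])
    for _ in range(4)
]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(shared.get())  # 4000
```

As the footnote warns, this removes the "forgotten lock" hazard but does not prevent deadlocks or livelocks when several such objects are locked in different orders.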
@@ -378,10 +408,14 @@
 .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing
 .. _Message-Passing: https://en.wikipedia.org/wiki/Message_passing
 .. _Events: https://en.wikipedia.org/wiki/Event_(computing)
-.. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model
-.. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
 .. _RPC: https://en.wikipedia.org/wiki/Remote_procedure_call
 .. _Broadcasting: https://en.wikipedia.org/wiki/Broadcasting_(networking)
 .. _Reliability: https://en.wikipedia.org/wiki/Reliability_(computer_networking)
 .. _Process-Calculus: https://en.wikipedia.org/wiki/Process_calculus
 .. _Futures: https://en.wikipedia.org/wiki/Futures_and_promises
+.. _CSP: https://en.wikipedia.org/wiki/Communicating_sequential_processes
+.. _Occam: https://en.wikipedia.org/wiki/Occam_(programming_language)
+.. _Transputer: https://en.wikipedia.org/wiki/Transputer
+.. _Go: https://en.wikipedia.org/wiki/Go_(programming_language)
+.. _CCS: https://en.wikipedia.org/wiki/Calculus_of_communicating_systems
+.. _ACP: https://en.wikipedia.org/wiki/Algebra_of_communicating_processes
diff -r ef900cf664bc -r 9dd4e501c7d4 CCastle/2.Analyse/8b.short_MPA_examples.rst
--- a/CCastle/2.Analyse/8b.short_MPA_examples.rst Mon Sep 19 22:02:08 2022 +0200
+++ b/CCastle/2.Analyse/8b.short_MPA_examples.rst Sun Sep 25 13:05:34 2022 +0200
@@ -1,5 +1,6 @@
 .. _MPA-examples:
 
+========================================
 Everyday Message Passing examples (ToDo)
 ========================================
 
diff -r ef900cf664bc -r 9dd4e501c7d4 CCastle/2.Analyse/9.ActorAbstraction.rst
--- a/CCastle/2.Analyse/9.ActorAbstraction.rst Mon Sep 19 22:02:08 2022 +0200
+++ b/CCastle/2.Analyse/9.ActorAbstraction.rst Sun Sep 25 13:05:34 2022 +0200
@@ -16,7 +16,10 @@
 
 As an abstraction, those active actors are similar to the :ref:`”Many Core” concept<CC>` we use for CCastle. Hence, we study this model a bit more
 
+----------------------
 
+Actor-Model_
+Actor-Model-Theory_
 
 
 
@@ -31,3 +34,6 @@
 .. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model
 .. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
 .. _Distributed-Computing: https://en.wikipedia.org/wiki/Distributed_computing
+
+.. _Actor-Model: https://en.wikipedia.org/wiki/Actor_model
+.. _Actor-Model-Theory: https://en.wikipedia.org/wiki/Actor_model_theory
diff -r ef900cf664bc -r 9dd4e501c7d4 std/localtoc2.irst
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/std/localtoc2.irst Sun Sep 25 13:05:34 2022 +0200
@@ -0,0 +1,11 @@
+.. -*- rst -*-
+   USE AS:
+     .. include:: /std/localtoc2.irst
+
+.. sidebar:: On this page
+   :class: localtoc
+
+   .. contents::
+      :depth: 2
+      :local:
+      :backlinks: none