
Re: Programming prioritisation



Hi all,

I think what Ian is talking about in this message should become a research project for somebody. Surely there is a trickle of funding somewhere for some students working on an XMOS or Adapteva system. It fits in perfectly with the challenge that David's slide show presents. In fact it is the major sticking point. A little brainstorming follows.

On Oct 2, 2012, at 4:23 AM, Ian East <ian.east@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi David

I accept that enough processors and interconnect will often obviate prioritisation, but I'm not yet convinced that it would do so always.

Like other things, there will always be a way to express what you want with the model (language) that you have, but it may not be as simple and transparent as with another.  There is precedent on my side as well.  We humans make use of priority all the time in the way we organise.  Maybe it's true that we'd do without it if there were enough of us, and we could communicate sufficiently well.  But communication seems to break down when our systems get large, as we know.  Could it be that when you rely too heavily on communication, it gets exponentially harder to do (or program)?

Well, that is what all the programmers outside our list seem to think, so it is our job to prove it is not so!!! My feeling is that David is right and, in fact, that the solution will be easier than many expect. This rests on a conjecture drawn from experience: when you have MORE THAN TWICE as much of a resource as you need, the problem of programming (at least in respect of that resource) becomes far easier.

Due to Moore's Law, everyone has been working in conditions where external communication is extremely scarce compared with raw processing power (including memory access). The comms-to-IPS ratio (physical dimension: bytes per instruction) has been decreasing rapidly, but perhaps, if David is right, that decrease is coming to an end. If so, it's paradigm change time.


My experience with large class frameworks (like Cocoa) is not encouraging.  While the classes may (or may not) be modular, the communication between them is certainly not.  It can be very hard to achieve something very simple – like dropping a box of paper clips and trying to pick up just a few.

This is where we need basic research. After all, in the real world it's easy. How is that?


If a few well-contained shared variables can replace a whole mess of channels, perhaps there is a place yet for them.

I think we can do better - especially since "a whole mess of channels" may soon cost no more than a whole mess of memory bytes does now. One possibility I thought of is tree broadcasting (physical analogy: a QR code [2D barcode] snapshot, where the broadcaster treats each "camera flash" as an ack, so it can detect full reception before it reuses that piece of "print area"). Because the data is read-only, the receivers can take their "snapshots" in parallel, and the ack countdown can run in parallel too (think of closing switches in series, with completion detected when all are closed). Even notifying a subset should be pretty cheap, if each tree branch fans out to, say, a subset of 16.


I will admit I'm hard-pressed to come up with a convincing example;  such things sometimes need to wait until they present themselves, and I'm no longer in a situation where I meet many real problems in system design.

"Real" here should be way ahead of the curve, anticipating the changes that David talked about, especially photonic comms. I am talking about some strange-looking algorithm design, that when the hardware catches up raises its hand and says "here I am!"

Larry


There remains at least the formal and philosophical question as to how to build priority into a programming language.  It's been fun to think about.
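It is a fun question. occam answered it with PRI ALT; for languages without that, prioritised choice can at least be emulated. Here is a sketch in Go (the channel names are illustrative, not from any real system): poll the urgent channel first with a non-blocking select, then fall back to a fair one, so the urgent message always wins when both are ready.

```go
package main

import "fmt"

// serve handles two input channels, giving strict priority to urgent,
// in the spirit of occam's PRI ALT. Go's select chooses fairly among
// ready cases, so priority is imposed by checking urgent first.
func serve(urgent, normal <-chan string, done <-chan struct{}, out chan<- string) {
	for {
		// First: take urgent work if any is ready right now.
		select {
		case m := <-urgent:
			out <- "urgent: " + m
			continue
		default:
		}
		// Otherwise: block fairly across all inputs.
		select {
		case m := <-urgent:
			out <- "urgent: " + m
		case m := <-normal:
			out <- "normal: " + m
		case <-done:
			close(out)
			return
		}
	}
}

func main() {
	urgent := make(chan string, 1)
	normal := make(chan string, 1)
	done := make(chan struct{})
	out := make(chan string)

	normal <- "log entry"
	urgent <- "overtemp alarm"
	go serve(urgent, normal, done, out)

	fmt.Println(<-out) // urgent: overtemp alarm - the alarm wins
	fmt.Println(<-out) // normal: log entry
	close(done)
}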

cheers
Ian
PS I really wish I had the time and excuse to play with an XMOS board…

On 1 Oct 2012, at 18:41, David May wrote:

Dear all, 

I think prioritisation is something we can now mostly 
avoid having to think about.

Processors are almost free. When I checked how many 1980s
transputers would fit on a single chip today - the answer is about
4000.  

And the XMOS chips (www.xmos.com) are being used for hard
real time applications - they have no prioritisation. Instead they 
have multiple cores with time-deterministic multi-threading.

One of the main points of the presentation I circulated is that
we can now do the same for interconnect. 

Best wishes

David 


Ian East
Open Channel Publishing Ltd.
(Reg. in England, Company Number 6818450)