
Re: OCCAM, JOYCE, SUPERPASCAL AND JAVA



Lawrence Dickson writes:
> All,
>    I haven't any access to Per Brinch Hansen's papers so I am working
> off your summaries. It LOOKS (correct me if I'm wrong) as if Prof
> Hansen's technique is equivalent to a pool of instances of each
> process - an instance gets deactivated at process termination but
> remains in existence to be reactivated at later need in an unrelated
> spawn. Using occam-style software-hardware analogy, this would be
> equivalent to addressable hardware modules, say "single level process
> nesters" and "single parenthesis parsers" wired together to make a
> parallel compiler. You could hot-add new hardware modules but not
> hot-remove them.
>    Possibly such a hardware analogy could clarify some of the issues
> to do with synchronization. Has Prof Hansen tried making a fine-grained
> parallel compiler using his techniques?

Yes: SuperPascal. You can download it as source code from his phb ftp area
at "top.cis.syr.edu". I am not sure whether it is fine-grained.

I wonder whether there is some property of his implementation (the scheduler?)
that ensures that concurrent access to the "pool" of reusable workspaces of a
particular process is mutually exclusive when there are several nested parallel
instances.
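
To make the question concrete, here is the kind of reuse I am picturing,
written as a rough Java sketch (Java only because it is in the subject line).
The class and method names are mine, not anything taken from Brinch Hansen's
sources; the synchronized methods are the mutual exclusion I am asking about.

  import java.util.ArrayDeque;
  import java.util.Deque;

  // One pool per process type: a terminated instance's workspace is not
  // destroyed, it stays here to be reactivated by a later, unrelated spawn.
  class WorkspacePool {
      private final Deque<int[]> free = new ArrayDeque<>();
      private final int wordsPerWorkspace;

      WorkspacePool(int wordsPerWorkspace) {
          this.wordsPerWorkspace = wordsPerWorkspace;
      }

      // Reuse a deactivated workspace if one exists, otherwise grow the pool.
      // "synchronized" is the point in question: several nested parallel
      // instances may be spawning and terminating at the same time.
      synchronized int[] acquire() {
          int[] ws = free.pollFirst();
          return (ws != null) ? ws : new int[wordsPerWorkspace];
      }

      // Deactivate: the workspace survives the process instance that used it.
      synchronized void release(int[] ws) {
          free.addFirst(ws);
      }
  }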

Otherwise, I don't see why there is any advantage over simply modifying "top"
on each allocation and release in the obvious way. Unless it is just that in
most cases the contending parallel processes will have different indices. But
that would need to be checked, and the check itself has a cost: maybe less
than a synchronisation? As I say, I must be missing something. :-(
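
By modifying "top" in the obvious way I mean something like the following,
again only my own sketch and not anything from the SuperPascal sources: a
single workspace area with a shared top index, where every allocation and
release is an update to that index and so itself a small synchronisation.

  import java.util.concurrent.atomic.AtomicInteger;

  // The "obvious" alternative: one workspace area addressed through a shared
  // top index. Every allocate/release touches top, so each one pays for some
  // form of synchronisation (an atomic add here).
  class TopAllocator {
      private final int[] area;
      private final AtomicInteger top = new AtomicInteger(0);

      TopAllocator(int totalWords) {
          this.area = new int[totalWords];
      }

      // Claim 'words' words by advancing top; returns the base offset.
      int allocate(int words) {
          return top.getAndAdd(words);
      }

      // Naive release just moves top back, which is only safe if releases
      // arrive in reverse allocation order; with overlapping parallel
      // lifetimes this is exactly where it gets awkward.
      void release(int words) {
          top.getAndAdd(-words);
      }
  }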

Adrian
-- 
A E Lawrence, MA., DPhil.  	adrian.lawrence@xxxxxxxxxxxxxx
MicroProcessor Unit, 13, Banbury Road, Oxford. OX2 6NN. UK.                
Voice: (+44)-1865-273274,  Fax: (+44)-1865-273275