
RE: concurrency research "hot" again?



Maybe I'm sawing the same old violin, but...

I think the key to breaking out of the "incredibly difficult to program in
parallel" conundrum is to dump the baggage of the last couple of decades
and go back, not only to CSP, but also to elegant (small) OS constructs.
If OS size is in kilobytes, there's hope you can understand COMPLETELY
what it is doing, especially if the OS restricts itself to resource
loading and leaves run-time concurrency to applications.
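
To make that concrete, here is a toy sketch of application-level
concurrency in the CSP style. I've written it in Go (whose channels are
descended from CSP); the names and structure are my own illustration, not
any particular OS interface:

    package main

    import "fmt"

    // worker takes work from 'in' and delivers results on 'out'. All of
    // its interactions with the rest of the program go through these two
    // channels, so its resource usage can be read straight off its
    // interface - no shared state, no locks.
    func worker(in <-chan int, out chan<- int) {
        for n := range in {
            out <- n * n // the "work": square the input
        }
        close(out)
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        go worker(in, out) // concurrency lives in the application

        go func() { // feeder process
            for i := 1; i <= 5; i++ {
                in <- i
            }
            close(in)
        }()

        for r := range out { // prints 1 4 9 16 25
            fmt.Println(r)
        }
    }

Nothing here shares memory, so there is nothing to lock and nothing for a
clever compiler to optimise away behind your back.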

The other thing is to accept a 5 or 10 percent performance hit in order to
keep clear, provable, traceable resource usage (i.e. eliminate spaghetti).
The "hit" is actually not a hit, because the cost of code tangles is really
much higher; but judged purely on raw benchmark numbers, you can always go
just a little faster by letting pointers and dynamic constructs run wild.

Larry Dickson

> Last week I attended a presentation by BillG where he also raised the
> topic of insufficient semantic richness in today's programming models -
> saying new developments are needed in programming languages to use the
> parallelism of multi-core CPU designs. At the same conference, the head of
> MS Research also talked about these challenges.
>
> About 6 months ago I sat through a presentation about options for
> parallelism in .NET today, and it wasn't pretty - way too much locking
> litter and thread invocation for my liking. Having to understand the
> behaviour of the compiler so that your process-control statements don't
> get optimised away isn't goodness.
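
That compiler hazard is easy to reproduce outside .NET, by the way. A toy
Go version (my own illustration, not from that presentation): busy-waiting
on a plain shared flag is a data race, and the compiler is entitled to
hoist the read out of the loop, so the control statement effectively
vanishes. Only an atomic (or, better, a channel) keeps it honest:

    package main

    import (
        "sync/atomic"
        "time"
    )

    func main() {
        var done atomic.Bool

        // With a plain 'var done bool' this program may spin forever:
        // the racy read of 'done' can be hoisted out of the loop and
        // the loop condition never re-tested.

        go func() {
            time.Sleep(10 * time.Millisecond)
            done.Store(true) // publish the flag safely
        }()

        for !done.Load() { // an atomic load cannot be hoisted
        }
    }
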
>
> There is a proposed MS approach which seems to be a form of 'CPU
> transaction', where entire blocks of statements effectively compete for
> resources and the OS or hardware detects a livelock or deadlock or other
> problematic condition. At this point, blocks of process state are reversed
> by hardware. I need to find out more about this. These techniques will
> probably need to exist if you want to build a robust OS on top of
> multicore, where applications with different "parallel heritage" must run
> together. Nonetheless, the best approach for app construction is to start
> along CSP lines, not to rely on the system to reverse out of trouble...
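
For what the contrast is worth, the "reverse and retry" idea can be shown
in miniature with a compare-and-swap loop. This is my own generic
optimistic-concurrency toy in Go, certainly not the actual MS mechanism:
the block's work is done speculatively, and if another party got in first
the speculative state is discarded and the whole block re-runs:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    // addTxn adds delta to *val "transactionally": read, compute
    // speculatively, then commit only if nobody else changed *val in
    // the meantime. On conflict the speculative result is discarded
    // (the "reversal") and the block is retried.
    func addTxn(val *int64, delta int64) {
        for {
            old := atomic.LoadInt64(val)
            next := old + delta // speculative work
            if atomic.CompareAndSwapInt64(val, old, next) {
                return // commit succeeded
            }
        }
    }

    func main() {
        var total int64
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                addTxn(&total, 1)
            }()
        }
        wg.Wait()
        fmt.Println(total) // always 10
    }
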
>
>
> -----Original Message-----
> From: owner-occam-com@xxxxxxxxxx [mailto:owner-occam-com@xxxxxxxxxx] On
> Behalf Of Allan McInnes
> Sent: Wednesday, 14 February 2007 1:25 PM
> To: occam list
> Subject: concurrency research "hot" again?
>
> It seems that concurrency is again getting "mainstream" attention. I've
> seen several articles in the popular press over the last few days touting
> Intel's new 80-core "teraflop-on-a-chip" demonstration chip. Most of the
> articles I've seen have made a big deal out of how difficult programmers
> will find it to program for 80 cores, and how lots of research needs to be
> done to develop new techniques for programming parallel architectures
> (here's one sample of the articles I've seen:
> http://www.crn.com/sections/breakingnews/dailyarchives.jhtml?articleId=197005746).
>
> At the same time, I've seen several links to "The Landscape of Parallel
> Computing Research: A View from Berkeley"
> (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-183.html) show
> up on various websites that I check regularly. In that report, the folks
> from Berkeley say, among other things:
>
> "Since real world applications are naturally parallel and hardware is
> naturally
> parallel, what we need is a programming model, system software, and a
> supporting architecture that are naturally parallel. Researchers have the
> rare
> opportunity to re-invent these cornerstones of computing, provided they
> simplify the efficient programming of highly parallel systems."
>
> So is research into concurrent programming becoming a hot topic again? And
> how many of these research efforts are simply going to reinvent the occam
> wheel? The Berkeley effort, in particular, sounds a lot like the
> occam/transputer approach (at least at a high level). However, the tech
> report in question makes no mention of CSP, occam, or transputers (OTOH,
> they also omit any mention of Berkeley's Prof. Ed Lee, who has done a lot
> of work on concurrent programming models via the Ptolemy project).
>
> It'll be interesting to see where this goes. Hopefully it'll lead to an
> upswing in funding for projects that can claim to be working towards
> support for massive concurrency - like KRoC/nocc :-)
>
> Allan
> --
> Allan McInnes <amcinnes@xxxxxxxxxx>
> PhD Candidate
> Dept. of Electrical and Computer Engineering
> Utah State University