
Re: CommsTime times?



Roger:

> ...  Of course, none of these are then directly comparable with
> Peter's figures, but some of his context-switch assumptions fall apart
> when we use parallel hardware rather than a context-switching CPU anyway!

I was trying not to stir up you FPGA guys ;-) ... my points assumed:

>   1. commstime, running as compiled code for a general-purpose processor,

Nevertheless, it's worth being reminded that things are different if we
implement this stuff directly in silicon and, in particular, that Barry
and Roger's compiler generates next-to-zero overheads for a two-branch
PAR output!
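
For anyone not following the jargon: a "two-branch PAR output" is the sort
of thing commstime's delta process does on every cycle - roughly this, in
occam 2 style (the names are my own choosing):

  PROC delta (CHAN OF INT in, out.0, out.1)
    WHILE TRUE
      INT x:
      SEQ
        in ? x
        PAR                    -- the two-branch PAR output
          out.0 ! x
          out.1 ! x
  :

A software kernel has to set up the two branches and then synchronise on
their termination; compiled straight to silicon, the two outputs simply
proceed in parallel.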

Finding a meaningful benchmark to compare hard and soft implementation
overheads is tricky.  The other benchmark I mentioned - the one that
forces cache misses - has a million processes, grouped in independent
pairs, with each pair communicating away like mad.  The software kernel
round-robins all the process contexts and, with a million to go through,
ensures that nothing is left in cache for each process by the time it gets
scheduled again!
A hardware implementation, however, would give the same time whether
there was one pair of processes or 500000.  I guess we need to get to
real applications to compare hard and soft implementations.
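
For the record, the shape of that cache-stressing benchmark is roughly as
follows - an occam-style sketch, with the names and the pair count chosen
for illustration rather than lifted from the actual source:

  VAL INT n.pairs IS 500000:         -- 500000 pairs = a million processes

  PROC ping (CHAN OF INT out, in)    -- sends first, then waits for the echo
    INT x:
    SEQ
      x := 0
      WHILE TRUE
        SEQ
          out ! x
          in ? x
  :

  PROC pong (CHAN OF INT in, out)    -- waits, then echoes straight back
    WHILE TRUE
      INT x:
      SEQ
        in ? x
        out ! x
  :

  PROC stress ()
    [n.pairs]CHAN OF INT a, b:
    PAR i = 0 FOR n.pairs            -- independent pairs, all always runnable
      PAR
        ping (a[i], b[i])
        pong (a[i], b[i])
  :

With half a million other pairs runnable between visits, each process finds
its workspace long gone from cache by the time the round-robin comes back
to it.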

Gerald mentioned:

> Another remark: The 1.9 us/context-switch time is not exactly one
> context-switch. This value represents the performance of the channel
> (synchronization, queuing and context-switching). This value is much more
> interesting than the bare context-switch time.

Yes.  The figures I quoted as "context-switch" times in my posting about
KRoC/JCSP commstimes also include the full channel semantics.

Gerald: you said "with 5 switches per cycle" for commstime.  But there
are 4 communications per cycle - so I could understand those causing 4 or
8 switches ... where does the fifth come from?
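
To be explicit about the cycle I'm counting over: this is the usual
commstime network, sketched here inline in occam 2 style (names are mine;
the consuming/timing process reads from d):

  PROC commstime (CHAN OF INT d)
    CHAN OF INT a, b, c:
    PAR
      INT x:                         -- "prefix": seed, then copy b -> a
      SEQ
        a ! 0
        WHILE TRUE
          SEQ
            b ? x
            a ! x
      INT y:                         -- "delta": copy a -> both c and d
      WHILE TRUE
        SEQ
          a ? y
          PAR
            c ! y
            d ! y
      INT z:                         -- "succ": increment c -> b
      WHILE TRUE
        SEQ
          c ? z
          b ! (z + 1)
  :

Each of a, b, c and d carries exactly one communication per cycle - hence
the 4.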

Cheers,

Peter.