Einstein: Got server request to delete file XXXXXX
Einstein@Home uses a BOINC feature, not used (to date) by other projects, called locality scheduling. The thrust of the concept is to reduce overall bandwidth usage by avoiding unnecessary downloads. This is contributor-friendly, as bandwidth cost and availability vary widely worldwide, so we seek to be inclusive at the low end of capacity with this tactic. Scheduling is tuned on a per-machine basis, i.e. that is the 'locality' part of the phrase. Here's an apprentice baker's tour:
In essence you get given an unsliced loaf of bread (data), then a sequence of instructions (the workunits) for slicing it up, and you return the slices as you go. Worldwide there are quite a number of distinct loaves about, a constantly varying pool as the project's tasks are met. You'll be sharing workunits with other holders of identical/cloned loaves: your wingmen. The wingman idea is to reduce the impact of instance errors by duplicating processing. A quorum is at least you and your wingman, but may be more for a given slice/task depending on how things go. (We used to have minimum quorums of three and four when I first turned up; presumably experience has shown fewer is OK.)
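To make the analogy concrete, here is a minimal sketch of how a data file, its workunits, and the cloned tasks relate. This is not BOINC's actual schema; all names and fields are illustrative only:

```python
from dataclasses import dataclass, field

# Illustrative model only -- not BOINC's real data structures.

@dataclass
class DataFile:
    """A 'loaf': a large data file cloned out to many hosts."""
    name: str                               # e.g. a hypothetical 'h1_0543.20_S6GC1'
    workunit_ids: list = field(default_factory=list)

@dataclass
class Workunit:
    """A 'slice': one unit of analysis defined against a single data file."""
    wu_id: int
    data_file: str                          # which loaf this slice comes from
    min_quorum: int = 2                     # you plus at least one wingman

@dataclass
class Task:
    """A clone of a slice: the copy of a workunit sent to one host."""
    wu_id: int
    host_id: int
    result: str | None = None               # filled in when the host reports back
```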
Now when your BOINC client reports in with finished work and requests new work, there's a bit of a conversation along the lines of 'what have you got for me, in terms of the particular loaf I already have?'. This is the logical point at which the bandwidth saving is mainly enacted: the scheduler looks at what still needs to be done to that specific loaf and attempts to issue further work against what you already have on hand.
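A rough sketch of the scheduler's side of that conversation, using the toy model above (the real scheduler is far more involved; this just illustrates the preference for work the host can do without new downloads):

```python
def pick_work(host_files, open_workunits, want=4):
    """Locality scheduling, crudely: prefer workunits whose data file the
    host already holds, so nothing new needs downloading.
    Illustrative only -- assumes the simple Workunit sketch above."""
    local = [wu for wu in open_workunits if wu.data_file in host_files]
    if local:
        return local[:want]        # keep slicing the loaf already on hand
    # No local work left: fall back to issuing a fresh loaf and its slices,
    # which is exactly the download the scheme otherwise tries to avoid.
    return open_workunits[:want]
```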
Eventually loaves get fully sliced and need replacing... as indicated, the BOINC client sorts this out by itself once told that a given loaf is no longer needed. Effectively the E@H scheduler is saying that it won't be expecting you to have that loaf any more, because in its view of the total project workflow that particular loaf has gone stale. :-)
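That is what the log line at the top of this entry records: the scheduler telling the client a stale loaf can go. A hedged sketch of what the client effectively does (hypothetical function and names; the real client routes this through its own file-management state):

```python
import os

def handle_delete_request(project_dir: str, filename: str) -> None:
    """Illustrative only: act on a server request to delete a data file
    ('loaf') that the scheduler says will no longer be needed."""
    print(f"Einstein: Got server request to delete file {filename}")
    path = os.path.join(project_dir, filename)
    if os.path.exists(path):
        os.remove(path)   # reclaim the disk space; the loaf has gone stale
```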
Quite a clever system really, and certainly a handy option in the BOINC framework, though not every project will need or use this locality scheduling. Hats off to whoever thought it up and implemented it! :-)
Strictly speaking, the language is this: a 'workunit' is a particular conceptual slice of bread from one conceptual loaf within the project's data breadbin. However, as the loaves are cloned to many machines and then sliced on each machine, there exist clones of the slices too. Such clones are called 'tasks'. The scheduler looks at the reported/completed tasks at hand, and it being its job to keep track of what was sent to whom, it can thus answer the 'what to do next, with what?' question. I suppose, in a perfect world with machines matched for mojo etc., you'd only need to clone a loaf twice, with those matched machines being wingmen for the entire loaf.
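For illustration, a toy quorum check in the same spirit (real E@H validation compares results numerically within tolerances, not by simple equality):

```python
from collections import Counter

def quorum_reached(tasks, min_quorum=2):
    """Toy validator: a workunit is done once min_quorum matching results
    have been returned by different hosts. Illustrative only -- uses the
    Task sketch above and naive equality of results."""
    results = [t.result for t in tasks if t.result is not None]
    counts = Counter(results)
    if not counts:
        return False
    _, n = counts.most_common(1)[0]   # size of the largest agreeing group
    return n >= min_quorum
```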
Original writer | Original FAQ | Date & time
---|---|---
Mike Hewson | 581 | 04-06-2011 22:07:51