Roslyn: Parallel programming feature without locks

Created on 26 Apr 2016 · 7 comments · Source: dotnet/roslyn

What if you could do parallelism in this form? (The original post included a screenshot, not reproduced here.)

The idea is that all methods can run in parallel without any lock, mutex, and so on, and without the programmer having to think about it.

Given (collected at compile time):

  • a general list of changeable objects (GLCO)
  • Job1 with its own list of changeable objects (LCO)
  • Job2 with its own list of changeable objects (LCO)
  • When Job1 starts, its LCO (including submethods) is recorded into the GLCO.
  • When Job2 starts, its LCO (including submethods) is compared against the GLCO.
  • If there is no match, Job2 runs immediately; if there is, it waits until the matching entries are removed from the GLCO (i.e. until Job1 ends).

Example for the end client (no explicit locks anywhere):

``` c#
BMW.Main()
{
    Job job2 = parallel BMW.Method1();
    ...
}
Job job1 = parallel BMW.Main();
```

I'd like to get your opinion on this idea.

Area-Language Design Discussion

Most helpful comment

I don't understand your suggestion well enough to comment.

(oops, I just commented)

All 7 comments

I don't understand your suggestion well enough to comment.

(oops, I just commented)

Today the user must write

``` c#
lock (resource)
{
    // ... manipulations
}
```

to make asynchronous operations safe. In my proposal, the user does not care about this. He only adds the "parallel" keyword before any method that must run asynchronously, and the compiler itself protects colliding methods.

So, what you want is to have a separate lock for every object and then for the compiler to automatically figure out the set of locks that's required to execute a method?

How exactly is the compiler going to figure out at compile time which objects are accessed at run time?

And even if doing that were possible, I don't think this kind of super-fine-grained locking is generally a good idea, since it would have a lot of overhead (even if you managed to implement it more efficiently than lock).

No. The compiler will not add locks. It will generate an LCO for every method at compile time (perhaps stored in metadata).

Example of how it works:

Definition:

``` c#
Method1() { a = 10; b = 20; }    // LCO = a, b
Method2() { a = 10; Method3(); } // LCO = a, c (the LCO of Method3)
Method3() { c = 30; }            // LCO = c
```

User code:

``` c#
Method1();
parallel Method2();
```

Runtime:

``` c#
Method1();
RunWhenFree(Method2, method2LCO); // because of the "parallel" keyword

Task RunWhenFree(Action action, LCO actionLCO)
{
    return Task.Run(() =>
    {
        while (!CanWork(actionLCO))
        {
            // wait or sleep
        }
        GLCO.Add(actionLCO);
        action();
        GLCO.Remove(actionLCO);
    });
}

bool CanWork(LCO lco)
{
    return !GLCO.ContainsAnyOf(lco);
}
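For what it's worth, the waiting scheme above can already be sketched as a library helper. The following is a hypothetical, simplified sketch — the `ParallelRuntime` and `RunWhenFree` names and the modelling of an LCO as a plain set of object references are mine, not part of any proposal — using a monitor instead of a busy-wait loop:

``` c#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: an LCO is modelled as a set of object references,
// and the GLCO is a global set of objects claimed by currently running jobs.
static class ParallelRuntime
{
    static readonly HashSet<object> Glco = new HashSet<object>();

    public static Task RunWhenFree(Action action, IReadOnlyCollection<object> lco)
    {
        return Task.Run(() =>
        {
            lock (Glco)
            {
                // Block until no object in this job's LCO is claimed in the GLCO.
                while (lco.Any(Glco.Contains))
                    Monitor.Wait(Glco);
                foreach (var o in lco)
                    Glco.Add(o);
            }
            try
            {
                action(); // runs with all of its objects claimed
            }
            finally
            {
                lock (Glco)
                {
                    foreach (var o in lco)
                        Glco.Remove(o);
                    Monitor.PulseAll(Glco); // wake jobs waiting on these objects
                }
            }
        });
    }
}
```

Two jobs whose LCOs share an object are serialized; jobs with disjoint LCOs run concurrently. Note that the bookkeeping itself still takes a lock internally — which is essentially the overhead objection raised above.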

How would that work for all but the most simple methods? For example, consider the following code:

``` c#
void TryIncreaseSalary(Employee employee)
{
    if (employee.CanIncreaseSalary)
    {
        employee.Salary += 10000;
        employee.Department.TotalSalaries += 10000;
    }
}

// …

var dept1 = new Department();
var dept2 = new Department();

var alice = new Employee("Alice", dept1, 50000, true);
var bob = new Employee("Bob", dept1, 50000, true);
var charlie = new Employee("Charlie", dept2, 50000, false);

parallel TryIncreaseSalary(alice);   // LCO = alice, dept1
parallel TryIncreaseSalary(bob);     // LCO = bob, dept1
parallel TryIncreaseSalary(charlie); // LCO = charlie
```

How would the compiler figure out the correct LCOs? And this is still pretty simple code, it could be much more complicated than that. (And ultimately, computing all objects accessed by a method would require solving the halting problem.)
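To make the static-analysis problem concrete, here is a small hypothetical example (the `Account` type and `Pick` helper are illustrative, not from the thread): the object a method mutates can be chosen by run-time data, so a compile-time LCO can at best say "some element of accounts", forcing a conservative claim on everything.

``` c#
using System;

class Account
{
    public int Balance;
}

static class Demo
{
    // The LCO of Deposit is "whatever object 'a' refers to" — a run-time
    // identity, not a name the compiler can record in metadata.
    public static void Deposit(Account a, int amount) => a.Balance += amount;

    // Which account gets touched depends entirely on the run-time index.
    public static Account Pick(Account[] accounts, int i) => accounts[i];
}
```

Here `parallel Deposit(Pick(accounts, i), 100)` and `parallel Deposit(Pick(accounts, j), 100)` would collide exactly when `i == j` — something only the run-time values decide.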

If you're not proposing anything complicated like this, then I don't think that would be a very useful feature.

Not to mention that automatically wrapping such methods in a Task and adding a blocking synchronization loop would add so much overhead that I can't even imagine scenarios where it would prove useful. Thread-pool exhaustion and deadlocking would be real concerns.

svick: You're right. I forgot about the arguments. Thanks all.
