Go’s Philosophy on Concurrency




CSP was and is a large part of what Go was designed around; however, Go also supports more traditional means of writing concurrent code through memory access synchronization and the primitives that follow that technique. Structs and methods in the sync and other packages allow you to perform locks, create pools of resources, preempt goroutines, and more.

This ability to choose between CSP primitives and memory access synchronization is great for you since it gives you a little more control over what style of concurrent code you choose to write to solve problems, but it can also be a little confusing. Newcomers to the language often get the impression that the CSP style of concurrency is considered the one and only way to write concurrent code in Go. For instance, in the documentation for the sync package, it says:


Package sync provides basic synchronization primitives such as mutual exclusion locks. Other than the Once and WaitGroup types, most are intended for use by low-level library routines. Higher-level synchronization is better done via channels and communication.


In the language FAQ, it says:


Regarding mutexes, the sync package implements them, but we hope Go programming style will encourage people to try higher-level techniques. In particular, consider structuring your program so that only one goroutine at a time is ever responsible for a particular piece of data.

Do not communicate by sharing memory. Instead, share memory by communicating.


There are also numerous articles, lectures, and interviews where various members of the Go core team espouse the CSP style over primitives like sync.Mutex.

It is therefore completely understandable to be confused as to why the Go team chose to expose memory access synchronization primitives at all. What may be even more confusing is that you’ll see synchronization primitives commonly out in the wild, see people complain about overuse of channels, and also hear some of the Go team members stating that it’s OK to use them. Here’s a quote from the Go Wiki on the matter:

 

One of Go’s mottos is “Share memory by communicating, don’t communicate by sharing memory.”

That said, Go does provide traditional locking mechanisms in the sync package. Most locking issues can be solved using either channels or traditional locks.
So which should you use?
Use whichever is most expressive and/or most simple.

 

That’s good advice, and this is a guideline you often see when working with Go, but it is a little vague. How do we understand what is more expressive and/or simpler? What criteria can we use? Fortunately, there are some guideposts we can use to help us do the correct thing. As we’ll see, the way we can mostly differentiate comes from where we’re trying to manage our concurrency: internally to a tight scope, or externally throughout our system. The four questions below arrange these guideposts into a decision tree.

 

Let’s step through these decision points one by one:


Are you trying to transfer ownership of data?


If you have a bit of code that produces a result and wants to share that result with another bit of code, what you’re really doing is transferring ownership of that data. If you’re familiar with the concept of memory-ownership in languages that don’t support garbage collection, this is the same idea: data has an owner, and one way to make concurrent programs safe is to ensure only one concurrent context has ownership of data at a time. Channels help us communicate this concept by encoding that intent into the channel’s type.

One large benefit of doing so is that you can create buffered channels to implement a cheap in-memory queue and thus decouple your producer from your consumer. Another is that by using channels, you’ve implicitly made your concurrent code composable with other concurrent code.
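
To make the ownership hand-off concrete, here is a minimal producer/consumer sketch. Everything in it (the channel name results, the buffer size of 5, the loop bounds) is invented for illustration:

package main

import "fmt"

func main() {
	// Buffered channel acting as a cheap in-memory queue, decoupling
	// the producer from the consumer.
	results := make(chan int, 5)

	// Producer: creates values and hands ownership off by sending them.
	// As the only writer, it is also responsible for closing the channel.
	go func() {
		defer close(results)
		for i := 0; i < 5; i++ {
			results <- i // ownership of each value transfers here
		}
	}()

	// Consumer: after a receive, this goroutine owns the value;
	// the producer never touches it again.
	for v := range results {
		fmt.Println(v)
	}
}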

 

Are you trying to guard internal state of a struct?


This is a great candidate for memory access synchronization primitives, and a
pretty strong indicator that you shouldn’t use channels. By using memory access
synchronization primitives, you can hide the implementation detail of locking
your critical section from your callers. Here’s a small example of a type that is
thread-safe, but doesn’t expose that complexity to its callers:

import "sync"

type Counter struct {
	mu    sync.Mutex // guards value; never exposed to callers
	value int
}

func (c *Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}
If you recall the concept of atomicity, we can say that what we’ve done here is
defined the scope of atomicity for the Counter type. Calls to Increment can be
considered atomic.
Remember the key word here is internal. If you find yourself exposing locks
beyond a type, this should raise a red flag. Try to keep the locks constrained to a
small lexical scope.
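
From the caller’s side, a Counter is just a value with an Increment method; the mutex never escapes the type. A brief usage sketch (the count of five goroutines is arbitrary):

package main

import (
	"fmt"
	"sync"
)

type Counter struct {
	mu    sync.Mutex
	value int
}

func (c *Counter) Increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.value++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Increment() // callers never see the lock
		}()
	}
	wg.Wait()
	fmt.Println(c.value) // always 5
}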

 

Are you trying to coordinate multiple pieces of logic?


Remember that channels are inherently more composable than memory access synchronization primitives. Having locks scattered throughout your object-graph sounds like a nightmare, but having channels everywhere is expected and encouraged! I can compose channels, but I can’t easily compose locks or methods that return values.

You will find it much easier to control the emergent complexity that arises in your software if you use channels, because of Go’s select statement and their ability to serve as queues and be safely passed around. If you find yourself struggling to understand how your concurrent code works, why a deadlock or race is occurring, and you’re using primitives, this is probably a good indicator that you should switch to channels.
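
For instance, select lets a single goroutine wait on several channels at once and compose them into one piece of coordination logic. A minimal sketch, where the channel names and sleep durations are invented for illustration:

package main

import (
	"fmt"
	"time"
)

func main() {
	results := make(chan string)
	done := make(chan struct{})

	// One goroutine produces a result.
	go func() {
		time.Sleep(10 * time.Millisecond)
		results <- "work complete"
	}()

	// Another signals shutdown by closing done.
	go func() {
		time.Sleep(50 * time.Millisecond)
		close(done)
	}()

	// select composes the two channels: whichever is ready first wins.
	for {
		select {
		case r := <-results:
			fmt.Println(r)
		case <-done:
			fmt.Println("shutting down")
			return
		}
	}
}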

 

Is it a performance-critical section?


This absolutely does not mean, “I want my program to be performant, therefore I will only use mutexes.” Rather, if you have a section of your program that you have profiled, and it turns out to be a major bottleneck that is orders of magnitude slower than the rest of the program, using memory access synchronization primitives may help this critical section perform under load. This is because channels use memory access synchronization to operate, and therefore they can only be slower. Before we even consider this, however, a performance-critical section might be hinting that we need to restructure our program.
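
If you do profile such a section, Go’s testing package makes the comparison easy to measure. A hedged sketch (saved as a _test.go file; the counter workload is hypothetical) of benchmarking a mutex-guarded increment against a channel-based one:

package counter

import (
	"sync"
	"testing"
)

// BenchmarkMutexIncrement guards a counter with a mutex.
func BenchmarkMutexIncrement(b *testing.B) {
	var mu sync.Mutex
	var n int
	for i := 0; i < b.N; i++ {
		mu.Lock()
		n++
		mu.Unlock()
	}
}

// BenchmarkChannelIncrement passes ownership of the counter through a
// buffered channel instead.
func BenchmarkChannelIncrement(b *testing.B) {
	ch := make(chan int, 1)
	ch <- 0 // seed the counter
	for i := 0; i < b.N; i++ {
		n := <-ch // take ownership of the value
		n++
		ch <- n // hand it back
	}
}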

 

Hopefully, this gives some clarity around whether to utilize CSP-style concurrency or memory access synchronization. There are other patterns and practices that are useful in languages that use the OS thread as the means of abstracting concurrency. For example, things like thread pools often come up. Because most of these abstractions are targeted toward the strengths and weaknesses of OS threads, a good rule of thumb when working with Go is to discard these patterns. That’s not to say they aren’t useful at all, but the use cases are certainly much more constrained in Go. Stick to modeling your problem space with goroutines, use them to represent the concurrent parts of your workflow, and don’t be afraid to be liberal when starting them. You’re much more likely to need to restructure your program than you are to begin running into the upper limit of how many goroutines your hardware can support.

Go’s philosophy on concurrency can be summed up like this: aim for simplicity, use channels when possible, and treat goroutines like a free resource.
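
As a small illustration of that last point, starting a large number of goroutines is cheap; the figure of 100,000 below is arbitrary, chosen only to show the program completes comfortably on ordinary hardware:

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100000 // arbitrary; far below any practical goroutine limit
	var wg sync.WaitGroup
	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			// Each goroutine would represent one concurrent part of a workflow.
		}()
	}
	wg.Wait()
	fmt.Println("all", n, "goroutines finished")
}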

 
