I am new to Go and I am trying to create a simple chat server where clients can broadcast messages to all connected clients.
In my server, I have a goroutine (an infinite for loop) that accepts connections. How can I broadcast each incoming message to all connected clients?
A more elegant solution is a "broker", to which clients may subscribe and from which they may unsubscribe.
To handle subscribing and unsubscribing elegantly as well, we can use channels for those operations too. The broker's main loop, which receives and distributes the messages, can then handle all of these cases in a single select statement, and synchronization comes from the nature of the solution.
Another trick is to store the subscribers in a map keyed by the channel we use to distribute messages to them. Using the channel as the map key makes adding and removing clients dead simple. This is possible because channel values are comparable, and their comparison is very efficient, as channel values are simply pointers to channel descriptors.
Without further ado, here's a simple broker implementation:
type Broker struct {
    stopCh    chan struct{}
    publishCh chan interface{}
    subCh     chan chan interface{}
    unsubCh   chan chan interface{}
}

func NewBroker() *Broker {
    return &Broker{
        stopCh:    make(chan struct{}),
        publishCh: make(chan interface{}, 1),
        subCh:     make(chan chan interface{}, 1),
        unsubCh:   make(chan chan interface{}, 1),
    }
}
func (b *Broker) Start() {
    subs := map[chan interface{}]struct{}{}
    for {
        select {
        case <-b.stopCh:
            return
        case msgCh := <-b.subCh:
            subs[msgCh] = struct{}{}
        case msgCh := <-b.unsubCh:
            delete(subs, msgCh)
        case msg := <-b.publishCh:
            for msgCh := range subs {
                // msgCh is buffered, use non-blocking send to protect the broker:
                select {
                case msgCh <- msg:
                default:
                }
            }
        }
    }
}
func (b *Broker) Stop() {
    close(b.stopCh)
}

func (b *Broker) Subscribe() chan interface{} {
    msgCh := make(chan interface{}, 5)
    b.subCh <- msgCh
    return msgCh
}

func (b *Broker) Unsubscribe(msgCh chan interface{}) {
    b.unsubCh <- msgCh
}

func (b *Broker) Publish(msg interface{}) {
    b.publishCh <- msg
}
Example using it:
func main() {
    // Create and start a broker:
    b := NewBroker()
    go b.Start()

    // Create and subscribe 3 clients:
    clientFunc := func(id int) {
        msgCh := b.Subscribe()
        for {
            fmt.Printf("Client %d got message: %v\n", id, <-msgCh)
        }
    }
    for i := 0; i < 3; i++ {
        go clientFunc(i)
    }

    // Start publishing messages:
    go func() {
        for msgId := 0; ; msgId++ {
            b.Publish(fmt.Sprintf("msg#%d", msgId))
            time.Sleep(300 * time.Millisecond)
        }
    }()

    time.Sleep(time.Second)
}
Output of the above will be (try it on the Go Playground):
Client 2 got message: msg#0
Client 0 got message: msg#0
Client 1 got message: msg#0
Client 2 got message: msg#1
Client 0 got message: msg#1
Client 1 got message: msg#1
Client 1 got message: msg#2
Client 2 got message: msg#2
Client 0 got message: msg#2
Client 2 got message: msg#3
Client 0 got message: msg#3
Client 1 got message: msg#3
You may consider the following improvements. They may or may not be useful, depending on how and for what you use the broker.
Broker.Unsubscribe() may close the message channel, signalling that no more messages will be sent on it:
func (b *Broker) Unsubscribe(msgCh chan interface{}) {
    b.unsubCh <- msgCh
    close(msgCh)
}
This would allow clients to range over the message channel, like this:
msgCh := b.Subscribe()
for msg := range msgCh {
    fmt.Printf("Client %d got message: %v\n", id, msg)
}
Then if someone unsubscribes this msgCh like this:
b.Unsubscribe(msgCh)
The above range loop will terminate after processing all messages that were sent before the call to Unsubscribe().
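For example, a client could consume its subscription with range and exit cleanly once it has been unsubscribed. A minimal sketch, assuming the broker b from the example above and an arbitrary point at which the client is unsubscribed:

msgCh := b.Subscribe()
go func() {
    for msg := range msgCh {
        fmt.Println("client got:", msg)
    }
    // Reached only after Unsubscribe() has closed msgCh:
    fmt.Println("client done")
}()

// ... later, when this client should stop receiving:
b.Unsubscribe(msgCh)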
If you want your clients to rely on the message channel being closed, and the broker's lifetime is narrower than your app's lifetime, then you could also close all subscribed message channels when the broker is stopped, in the Start() method, like this:
case <-b.stopCh:
    for msgCh := range subs {
        close(msgCh)
    }
    return
This is a late answer, but I think it may appeal to some curious readers.
Go channels are the widely recommended tool for concurrency, and the Go community firmly follows this saying:
Do not communicate by sharing memory; instead, share memory by communicating.
I am completely neutral on this, and I think options other than channels should also be considered when it comes to broadcasting.
Here is my take: Cond from the sync package is widely overlooked. Implementing a broadcaster as suggested by Bronze man in this very context is worth noting.
I was delighted with icza's suggestion to use channels and broadcast messages over them. I follow the same approach, but use sync's condition variable:
// Broadcaster is the struct which encompasses broadcasting
type Broadcaster struct {
    cond        *sync.Cond
    subscribers map[interface{}]func(interface{})
    message     interface{}
    running     bool
}
This is the main struct that our whole broadcasting concept relies on.
Below, I define some behaviours for this struct. In a nutshell, subscribers should be able to be added and removed, and the whole process should be stoppable.
// SetupBroadcaster gives the broadcaster object to be used further in messaging
func SetupBroadcaster() *Broadcaster {
    return &Broadcaster{
        cond:        sync.NewCond(&sync.RWMutex{}),
        subscribers: map[interface{}]func(interface{}){},
    }
}
// Subscribe lets others enroll in the broadcast event!
func (b *Broadcaster) Subscribe(id interface{}, f func(input interface{})) {
    b.cond.L.Lock() // guard the map against concurrent access
    b.subscribers[id] = f
    b.cond.L.Unlock()
}
// Unsubscribe stops receiving the broadcast
func (b *Broadcaster) Unsubscribe(id interface{}) {
    b.cond.L.Lock()
    delete(b.subscribers, id)
    b.cond.L.Unlock()
}

// Publish publishes the message
func (b *Broadcaster) Publish(message interface{}) {
    go func() {
        b.cond.L.Lock()
        b.message = message
        b.cond.Broadcast()
        b.cond.L.Unlock()
    }()
}
// Start the main broadcasting event
func (b *Broadcaster) Start() {
    b.running = true
    for b.running {
        b.cond.L.Lock()
        b.cond.Wait()
        go func() {
            for _, f := range b.subscribers {
                f(b.message) // publishes the message
            }
        }()
        b.cond.L.Unlock()
    }
}

// Stop broadcasting event
func (b *Broadcaster) Stop() {
    b.running = false
}
Next, I can use it quite easily:
messageToaster := func(message interface{}) {
    fmt.Printf("[New Message]: %v\n", message)
}

unwillingReceiver := func(message interface{}) {
    fmt.Println("Do not disturb!")
}

broadcaster := SetupBroadcaster()
broadcaster.Subscribe(1, messageToaster)
broadcaster.Subscribe(2, messageToaster)
broadcaster.Subscribe(3, unwillingReceiver)

go broadcaster.Start()

broadcaster.Publish("Hello!")
time.Sleep(time.Second)

broadcaster.Unsubscribe(3)
broadcaster.Publish("Goodbye!")
It should print something like this in any order:
[New Message]: Hello!
Do not disturb!
[New Message]: Hello!
[New Message]: Goodbye!
[New Message]: Goodbye!
See this on the Go Playground.
I created a simple library to support channel broadcasting. It can be used to broadcast asynchronous results from many modern service API calls: https://github.com/guiguan/caster#broadcast-a-go-channel
What you are doing is a fan-out pattern, that is to say, multiple endpoints are listening to a single input source. The result of this pattern is that only one of these listeners will receive any given message from the input source. The only exception is a close of the channel: the close is recognized by all of the listeners, and is thus a kind of "broadcast".
But what you want to do is broadcast a message read from a connection, so we could do something like this:
Let each worker listen on a dedicated broadcast channel, and dispatch messages from the main channel to each dedicated broadcast channel.
type worker struct {
    source chan interface{}
    quit   chan struct{}
}

func (w *worker) Start() {
    w.source = make(chan interface{}, 10) // some buffer size to avoid blocking
    go func() {
        for {
            select {
            case msg := <-w.source:
                _ = msg // do something with msg
            case <-w.quit: // will explain this in the last section
                return
            }
        }
    }()
}
And then we could have a bunch of workers:
workers := []*worker{&worker{}, &worker{}}
for _, worker := range workers {
    worker.Start()
}
Then start our listener:
ch := make(chan interface{}) // the main channel the listener feeds

go func() {
    for {
        conn, _ := listener.Accept()
        ch <- conn
    }
}()
And a dispatcher:
go func() {
    for {
        msg := <-ch
        for _, worker := range workers {
            worker.source <- msg
        }
    }
}()
If workers need to be added dynamically, the solution given above still works. The only difference is that whenever you need a new worker, you have to create it, start it, and then push it into the workers slice. But this requires a thread-safe slice, which needs a lock around it. One possible implementation looks like this:
type threadSafeSlice struct {
    sync.Mutex
    workers []*worker
}

func (slice *threadSafeSlice) Push(w *worker) {
    slice.Lock()
    defer slice.Unlock()

    slice.workers = append(slice.workers, w)
}

func (slice *threadSafeSlice) Iter(routine func(*worker)) {
    slice.Lock()
    defer slice.Unlock()

    for _, worker := range slice.workers {
        routine(worker)
    }
}
Whenever you want to start a worker (here workers is a *threadSafeSlice value):

workers := &threadSafeSlice{}

w := &worker{}
w.Start()
workers.Push(w)
And your dispatcher will be changed to:

go func() {
    for {
        msg := <-ch
        workers.Iter(func(w *worker) { w.source <- msg })
    }
}()
One good practice is: never leave a dangling goroutine. So when you have finished listening, you need to shut down all of the goroutines you fired. This is done via the quit channel in worker:
First we need to create a global quit signalling channel:
globalQuit := make(chan struct{})
And whenever we create a worker, we assign the globalQuit channel to it as its quit signal:
worker.quit = globalQuit
Then when we want to shutdown all workers, we simply do:
close(globalQuit)
Since a close is recognized by all listening goroutines (this is the point you understood), all the goroutines will return. Remember to shut down your dispatcher routine as well, but I will leave that to you :)
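For completeness, here is a minimal sketch of one way that dispatcher shutdown could look, assuming the ch, globalQuit and workers values introduced above:

go func() {
    for {
        select {
        case msg := <-ch:
            // forward the message to every worker's dedicated channel
            workers.Iter(func(w *worker) { w.source <- msg })
        case <-globalQuit:
            // stop dispatching once the global quit channel is closed
            return
        }
    }
}()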
Broadcasting to a slice of channels, and using a sync.Mutex to manage adding and removing channels, may be the easiest way in your case.
Here is what you can do to broadcast in Go:
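A minimal sketch of that idea follows. The hub type, its methods, and the string message type are only illustrative assumptions (not from any library), and it assumes import "sync":

type hub struct {
    mu   sync.Mutex
    subs []chan string
}

// Add registers a new subscriber channel and returns it.
func (h *hub) Add() chan string {
    ch := make(chan string, 1)
    h.mu.Lock()
    h.subs = append(h.subs, ch)
    h.mu.Unlock()
    return ch
}

// Remove deletes a subscriber channel from the slice.
func (h *hub) Remove(ch chan string) {
    h.mu.Lock()
    defer h.mu.Unlock()
    for i, c := range h.subs {
        if c == ch {
            h.subs = append(h.subs[:i], h.subs[i+1:]...)
            return
        }
    }
}

// Broadcast sends msg to every subscriber without blocking the caller.
func (h *hub) Broadcast(msg string) {
    h.mu.Lock()
    defer h.mu.Unlock()
    for _, c := range h.subs {
        select {
        case c <- msg: // deliver if the subscriber can keep up
        default: // drop the message rather than block the broadcaster
        }
    }
}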
Because Go channels follow the Communicating Sequential Processes (CSP) pattern, channels are a point-to-point communication entity. There is always one writer and one reader involved in each exchange.
However, each channel end can be shared amongst multiple goroutines. This is safe to do - there is no dangerous race condition.
So there can be multiple writers sharing the writing end. And/or there can be multiple readers sharing the reading end. I wrote more on this in a different answer, which includes examples.
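For instance, several goroutines can safely receive from the same channel, but each value is then delivered to exactly one of them. A small illustrative sketch:

jobs := make(chan int)
for i := 0; i < 3; i++ {
    go func(id int) {
        for j := range jobs {
            fmt.Printf("worker %d got job %d\n", id, j) // each value goes to exactly one reader
        }
    }(i)
}
for j := 0; j < 6; j++ {
    jobs <- j
}
close(jobs) // the close, however, is observed by every reader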
If you really need a broadcast, you cannot do this directly, but it is not hard to implement an intermediate goroutine that copies a value out to each of a group of output channels.
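A minimal sketch of such an intermediate goroutine (the function name and the string element type are illustrative assumptions):

// fanOut copies every value received on in to each of the out channels,
// and closes all of them once in is closed.
func fanOut(in <-chan string, outs []chan<- string) {
    go func() {
        for v := range in {
            for _, out := range outs {
                out <- v // copy the value to every output channel
            }
        }
        for _, out := range outs {
            close(out) // let every consumer see the end of the stream
        }
    }()
}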