I'm currently working on a large project that requires a server-sent events implementation. I've decided to use the EventSource transport for it, and started with a simple chat.
Pretty much the only way of doing it, unless you put the refresh timer on the client side and use the server side as web services only. Load will be high with that number of users, but you're limited by going with a pure-PHP solution. I'd rather look at a C/C++ daemon on the server and raw sockets.
Use memcached as temporary storage, then a back-end process to commit the archive to the MySQL DB hourly, minutely, or whatever suits you.
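A minimal sketch of that buffering pattern, with an in-memory list standing in for memcached and SQLite standing in for MySQL (all names here are illustrative, not from the original post):

```python
import sqlite3
import time

# Stand-in for memcached: a fast, volatile buffer for the hot path.
buffer = []

def record_message(user, text):
    """Hot path: write only to the fast temporary store."""
    buffer.append((user, text, time.time()))

def flush_to_db(conn):
    """Back-end process: commit the buffered archive in one batch.
    Run this hourly/minutely from a cron job or daemon."""
    global buffer
    batch, buffer = buffer, []
    conn.executemany(
        "INSERT INTO messages (user, text, ts) VALUES (?, ?, ?)", batch)
    conn.commit()
    return len(batch)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (user TEXT, text TEXT, ts REAL)")
record_message("alice", "hello")
record_message("bob", "hi")
flushed = flush_to_db(conn)
```

The point is that the per-message write cost stays cheap, and MySQL only sees one batched insert per flush interval.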
Yes, but it depends on how much hardware you're willing to throw at the solution, or how confident you are setting up something such as master-slave replication with one read DB and one write DB.
Hope that helps.
Will all 1000+ users be connected simultaneously? And are you using Apache with PHP? If so, I think the thing you should really be concerned about is memory: each user is holding open a socket, an Apache process, and a PHP instance. You'll need to measure for your own setup, but if we say 20MB each, that is 20GB of memory for 1000 users. Even if you tighten things so each process takes 12MB, that is still 12GB per 1000 users. (An m2.xlarge EC2 instance has 17GB of memory, so if you budget one of those per 500-1000 users I think you will be okay.)
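That capacity arithmetic is easy to sanity-check yourself; a quick helper (names are mine, not part of any real tool):

```python
import math

def servers_for(users, mb_per_user, server_gb):
    """Number of servers needed to hold `users` persistent
    connections at `mb_per_user` MB each, on boxes with
    `server_gb` GB of RAM."""
    users_per_server = (server_gb * 1024) // mb_per_user
    return math.ceil(users / users_per_server)

# 1000 users at 20 MB each on 17 GB (m2.xlarge-sized) boxes:
servers_for(1000, 20, 17)  # -> 2 (about 870 users fit per box)
```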
In contrast, with your 10 second poll time, CPU usage is very low. For the same reason, I would not imagine polling the MySQL DB will be the bottleneck, but at that level of use I would consider having each DB write also do a write to memcached. Basically, if you don't mind throwing a bit of hardware at it, your approach looks doable. It is not the most efficient use of memory, but if you are familiar with PHP it will probably be the most efficient use of programmer time.
UPDATE: Just saw the OP's comment and realized that usleep(10000) is 0.01s, not 10s. Oops! That changes everything:
I'd use a queue service instead of memcached, and you could either find something off the shelf, or write something custom in PHP fairly easily. You can still keep MySQL as the main DB and have your queue service poll MySQL; the difference here is you only have one process polling it intensively, not one thousand. The queue service is a simple socket server that accepts a connection from each of your front-facing PHP scripts. Each time its polling finds a new message, it broadcasts that to all the clients that have connected to it. (There are different ways to architect it, but I hope that gives you the general idea.)
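The broadcast step of such a queue service might look like this (a Python sketch for illustration; the real thing would be a long-running daemon, and the MySQL polling loop shown in comments is pseudocode):

```python
import socket

def broadcast(clients, payload):
    """Send one message to every connected front-end script,
    dropping any client whose socket has gone away."""
    alive = []
    for client in clients:
        try:
            client.sendall(payload)
            alive.append(client)
        except OSError:
            client.close()
    return alive

# Daemon main loop (pseudocode):
#   while True:
#       msg = poll_mysql_for_new_message()  # ONE poller, not a thousand
#       if msg:
#           clients = broadcast(clients, msg)
```

The key property is that the intensive DB polling happens once, in this daemon, while the thousand front-facing scripts just sit blocked on their sockets.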
Over on the front-facing PHP script, you use a socket_select() call with a 15-second timeout. It only wakes up when there is data (or when the timeout expires), so it uses zero CPU the rest of the time. (The 15-second timeout is so you can send SSE keep-alives.)
(Source for the 20MB and 12MB figures)