I come from a Java background, where there were synchronized blocks:

The "synchronized" keyword prevents concurrent access to a block of code or object by multiple threads.
Singletons and shared classes do exist in PHP.
But PHP is short-lived. For every HTTP request, a new process is spawned and your script is executed from scratch. This means that a synchronized
keyword would not help in your case.
Race conditions between multiple requests are avoided by application design.
For example, in your SQL query you would write count = count + 1
instead of reading count
first, increasing it by one, and then writing it back. This is not that different from some of the mechanics you would employ in Java for concurrency (synchronized
is often the worst cure for concurrency problems).
For databases, you can lock tables, and use transactions to ensure integrity over multiple queries.
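As a sketch of both ideas, here is the atomic count = count + 1 update and a multi-statement transaction, using an in-memory SQLite database through PDO (the counters table and its columns are made-up names for illustration; a real application would point PDO at its own database):

```php
<?php
// Sketch: atomic increment and a transaction via PDO.
// Assumes the pdo_sqlite driver; "counters" is an illustrative table.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE counters (name TEXT PRIMARY KEY, count INTEGER)');
$pdo->exec("INSERT INTO counters (name, count) VALUES ('hits', 0)");

// Atomic: the database increments the value in a single statement,
// so two concurrent requests cannot both read the same stale value.
$pdo->exec("UPDATE counters SET count = count + 1 WHERE name = 'hits'");

// Transaction: either both statements take effect, or neither does.
$pdo->beginTransaction();
try {
    $pdo->exec("UPDATE counters SET count = count + 1 WHERE name = 'hits'");
    $pdo->exec("INSERT INTO counters (name, count) VALUES ('misses', 0)");
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}

$count = $pdo->query("SELECT count FROM counters WHERE name = 'hits'")
             ->fetchColumn();
echo $count; // 2
```

The key point is that the increment happens inside the database, which already serializes conflicting writes, so no application-level lock is needed.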
Consider this example: after a payment, PayPal sometimes sends the confirmation TWICE, at exactly the same time. Now suppose you basically do if($confirmed && doesNotExistYet()){ $c = new Customer(); $c->save();}
Use UNIQUE constraints on your database. Or lock the table. Or use some other mechanism. You have options - but a synchronized
keyword would not solve it, because the two confirmations are handled by different processes.
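To make the PayPal scenario concrete, here is a sketch of the UNIQUE-constraint approach, again using an in-memory SQLite database via PDO (the customers table, payment_id column, and saveCustomer helper are made up for this example):

```php
<?php
// Sketch: de-duplicating a double-sent payment confirmation with a
// UNIQUE constraint. Assumes pdo_sqlite; names are illustrative.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE customers (payment_id TEXT UNIQUE, name TEXT)');

function saveCustomer(PDO $pdo, string $paymentId, string $name): bool
{
    try {
        $stmt = $pdo->prepare(
            'INSERT INTO customers (payment_id, name) VALUES (?, ?)');
        $stmt->execute([$paymentId, $name]);
        return true;  // first confirmation wins
    } catch (PDOException $e) {
        if ($e->getCode() == 23000) {  // integrity constraint violation
            return false;              // duplicate confirmation, ignored
        }
        throw $e;  // anything else is a real error
    }
}

$first  = saveCustomer($pdo, 'PAY-1', 'Alice');
$second = saveCustomer($pdo, 'PAY-1', 'Alice'); // the duplicate
```

Whichever of the two simultaneous requests inserts first succeeds; the other hits the constraint and is safely discarded, with no lock in PHP at all.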
A word on singletons: Java ensures that a singleton exists only once per process, and PHP does the same. If you run the same JAR twice, you get two Java processes, each with its own instance of the singleton.
PHP is designed to be short-lived - a PHP application is born and dies with an HTTP request.
Every time your webserver receives an HTTP request, it will invoke a new instance of your application. This instance lives in its own process with its own state.
This means that one request process cannot modify the state of another request process, but it also means that you don't have a built-in way to prevent two processes from executing the same code.
This means you have to design your application around this constraint.
Other people answer with lots of words, but no code. Why don't you try this?
function synchronized($handler)
{
    // Derive a lock name from the call site (file and line of the caller),
    // so every call site gets its own lock file in the temp directory.
    $name = md5(json_encode(debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 1)[0]));
    $filename = sys_get_temp_dir() . '/' . $name . '.lock';
    $file = fopen($filename, 'w');
    if ($file === false) {
        return false;
    }
    // Block until we hold an exclusive lock; other processes running the
    // same code wait here, just as threads wait to enter a synchronized block.
    $lock = flock($file, LOCK_EX);
    if (!$lock) {
        fclose($file);
        return false;
    }
    $result = $handler();
    flock($file, LOCK_UN);
    fclose($file);
    return $result;
}

function file_put_contents_atomic($filename, $string)
{
    return synchronized(function () use ($filename, $string) {
        // Write to a temp file first, then rename it into place, so
        // readers never observe a half-written file.
        $tempfile = $filename . '.temp';
        $result = file_put_contents($tempfile, $string);
        $result = $result && rename($tempfile, $filename);
        return $result;
    });
}
The above code does an atomic file_put_contents
using a custom "synchronized" function.
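To see the primitive this helper is built on in isolation, here is a minimal flock() sketch (the demo.lock filename is arbitrary): two processes opening the same lock file serialize on LOCK_EX, which is exactly what gives synchronized() its mutual exclusion.

```php
<?php
// Minimal flock() demo: LOCK_EX grants the lock to one process at a time.
$path = sys_get_temp_dir() . '/demo.lock';
$file = fopen($path, 'w');
flock($file, LOCK_EX);  // a second process would block here until we unlock
$result = 1 + 1;        // the "critical section": runs while holding the lock
flock($file, LOCK_UN);
fclose($file);
echo $result; // 2
```

Note that flock() is advisory: it only excludes other processes that also call flock() on the same file, which is why every caller must go through the same helper.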
Source: https://tqdev.com/2018-java-synchronized-block-in-php
The short answer to your question is no, nothing like this exists for PHP because, as you point out, PHP does not run multi-threaded processes. PHP 7 is the same in this respect as previous PHP versions.
Now you describe the lack of multithreading as a major disadvantage. This isn't necessarily the case: it's a major difference. Whether it's a disadvantage or not is another question. That depends very much on context.
The problem you describe is having a shared object between processes. A shared object is a non-starter for PHP without multi-threading, but the main point of a shared object is to share the data within it.
If we're talking about shared data, you're right that a DB or file is a common way to do that, and is usually sufficient in terms of performance, but if you really need more performance you can genuinely share the data in memory by using something like Memcache. There are well-established libraries for dealing with memcache in PHP. This would be the normal PHP solution.
Now there are two other things I'd like to raise that may be relevant here.
Firstly, let me add NodeJS to the equation, because this does things differently again. In NodeJS, the system is also single threaded, but unlike PHP which starts a new process for each request, in NodeJS all requests are fed into a single constantly-running thread. This means that in NodeJS, even though it's single-threaded, you can have global data (commonly the DB connection) that is shared between requests, because they're all running in the same process.
The point here is that being single-threaded isn't the reason why PHP can't share data between requests; it's more that in PHP each request is isolated from the others in its own process. Far from being a disadvantage, this can actually be an advantage -- for example, a PHP crash won't take down your entire site, as it could in a multi-threaded or shared-thread environment. In fact this is one of NodeJS's biggest weaknesses: it's quite easy for a single bit of poorly written code to make the server completely unresponsive.
The second thing I wanted to raise is that there are experimental branches of PHP that do allow you to use the language in both multi-threaded and shared-thread environments. In both cases, these are very experimental and certainly should not be used in production. As you note in the question, the language is missing key features that would be necessary in these environments. But the fact is that it can be done.
Neither of these experiments is ever likely to go anywhere, because existing PHP code is not written with these kinds of environments in mind. You'll already know what happens if you write a multi-threaded Java program without protecting your shared data, so it should be clear that if PHP were ever to seriously entertain running on platforms with shared data, anyone wanting to use it with existing PHP code would need to do extensive rewriting. That's not going to happen, so it's safe to say that PHP will stick with its current format of isolated processes.