Bit of a strange set-up, but I have come across a project where Composer has been used in a local environment to get a project started. The original developer did not have SSH access to the production server.
The usual workflow would be:

1. Download the current state of the production site into a local development environment, ideally under version control.
2. Optionally edit `composer.json` manually to add the new dependency.
3. Run `composer require new/package`.
4. Test the result locally.
5. Upload everything back to the production server.

There are several remarks to add to this general workflow:
ad 1: If there is no version control, you'd probably be better off starting a local Git repository right now and downloading the current production state into it as the first commit. Not having version control makes things harder, especially going back to known working versions. And because the files on the production server are probably unmanaged, you should also check the `vendor` folder into your newly created repository, just to avoid losing any changes that may have been made directly to these files.
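A minimal sketch of that first step, assuming you have already downloaded the production files (including `vendor`) into the current directory:

```sh
# Start tracking the downloaded production state as the first commit
git init
git add -A            # deliberately includes the vendor folder
git commit -m "Current production state"
```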
ad 2: Manually editing the `composer.json` file is sometimes a faster way to get what you want if you know what you are doing, but you have to get the JSON exactly right. For me it is usually too much hassle if I already have a command line open. The `composer require` command will also select a matching version that fits the already installed dependencies, whereas manual editing may lead to version conflicts that you then have to untangle. Remember to only install dependencies that work with the PHP version in production. You should probably run `composer config platform.php X.Y.Z` to record the production PHP version in the `composer.json` file, which prevents Composer from selecting dependency versions based on your development PHP. Adding the `-g` switch would write this setting to your global (user) configuration instead, which affects all Composer operations you start, including those for other projects.
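For example, assuming production runs PHP 7.4.33 (a placeholder version here), that command would add a section like this to `composer.json`:

```json
{
    "config": {
        "platform": {
            "php": "7.4.33"
        }
    }
}
```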
ad 3: Manual editing still requires you to run `composer update` on the command line afterwards, so there's probably no reason not to run `composer require` instead.
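For comparison, both routes end at the command line anyway (the package name is just an example):

```sh
# Route A: after editing composer.json by hand, resolve and install just that package
composer update new/package

# Route B: let Composer edit composer.json for you
composer require new/package
```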
ad 4: How to do this depends entirely on the environment you have to work with.
ad 5: At this stage you have assembled all the files necessary for a working website. Uploading them to production will result in a working website unless the upload itself fails somehow. You could also use an "upload to a temporary folder first, then move into place on the server" approach if you fear FTP is unreliable. Some people take a different approach: they have a Git repository on the production server and simply push the version that should go live to that remote; a post-push script then runs `composer install`. This automated approach also works (though not over FTP), but it carries a higher risk of something failing during deployment, and there is usually no easy way back to the previous state.
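Such a post-push script is typically a Git post-receive hook in a bare repository on the server. A rough sketch (the paths, branch name, and flags here are assumptions, not a drop-in script):

```sh
#!/bin/sh
# post-receive hook: check the pushed branch out into the web root,
# then install the locked dependencies without dev packages.
GIT_WORK_TREE=/var/www/site git checkout -f main
cd /var/www/site || exit 1
composer install --no-dev --optimize-autoloader
```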
So in the end I'd say that uploading the whole folder structure via FTP (well, that protocol is insecure in itself; better to replace it with FTPS (FTP over SSL), SFTP, or SCP) beats running Composer on the production server.
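If an FTP-style client is all you have, a mirroring tool can make the upload less error-prone. A hypothetical `lftp` invocation over FTPS (host, user, and paths are placeholders; lftp will prompt for the password):

```sh
# Upload the whole local project tree to the server's web root over FTPS
lftp -u deployuser -e "mirror -R ./my-project /var/www/site; bye" ftps://example.com
```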
Your specific question regarding which folders to upload: all of them, and especially the whole `vendor` folder. It contains the current autoloader and all the dependency packages the software needs. If you worked correctly, you downloaded the existing `composer.json` and `composer.lock` files together with everything else and then added the new dependency. Doing so changed both of these files, added the new package to the `vendor` folder, and registered its classes with the autoloader.
Don't fiddle with uploading only parts of the `vendor` folder, or with manually editing any part of the autoloader. You will only create surprises for the developer coming after you if you get some detail wrong, and it also takes more time. Composer is a very good tool for managing dependencies - use it!