What exactly is RESTful programming?
The answer is very simple: there is a dissertation written by Roy Fielding ("Architectural Styles and the Design of Network-based Software Architectures", 2000). In that dissertation he defines the REST principles. If an application fulfills all of those principles, then it is a REST application.
The term RESTful was created because people exhausted the word REST by calling their non-REST applications REST. After that the term RESTful was exhausted as well. Nowadays we talk about Web APIs and Hypermedia APIs, because most of the so-called REST applications did not fulfill the HATEOAS part of the uniform interface constraint.
The REST constraints are the following:
client-server architecture
So it does not work with, for example, PUB/SUB sockets; it is based on REQ/REP (request/reply).
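A minimal sketch of that request/reply style using fetch in TypeScript (the URL is hypothetical):

```typescript
// Request/reply: the client initiates every exchange and the server answers.
// Nothing is pushed to the client without a request. (Hypothetical URL.)
const response = await fetch("https://example.com/api/v1/products/123");
console.log(response.status, await response.json());
```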
stateless communication
So the server does not maintain the states of the clients. This means that you cannot use a server-side session storage and you have to authenticate every request. Your clients possibly send basic auth headers through an encrypted connection. (In large applications it is hard to maintain many sessions.)
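For example, the client can attach Basic auth credentials to every single request instead of relying on a server-side session. A minimal sketch, assuming a hypothetical endpoint and credentials:

```typescript
// Stateless: every request carries its own credentials; the server keeps
// no session between requests. (Hypothetical endpoint and credentials;
// only send Basic auth over an encrypted connection.)
const credentials = btoa("alice:secret"); // base64 of "user:password"

const response = await fetch("https://example.com/api/v1/orders", {
  headers: {
    Authorization: `Basic ${credentials}`,
    Accept: "application/json",
  },
});
// The next request must send the Authorization header again; the server
// remembers nothing about this client.
```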
usage of cache if you can
So you don't have to serve the same requests again and again.
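One standard way to do this is ETag-based revalidation with conditional GETs. A sketch, assuming a hypothetical URL and a client without a built-in HTTP cache (e.g. Node's fetch):

```typescript
// First request: the server labels the representation with an ETag.
const first = await fetch("https://example.com/api/v1/products/123");
const etag = first.headers.get("ETag"); // e.g. '"v42"'
const body = await first.json();

// Later: ask whether the cached copy is still valid instead of
// downloading the representation again.
const second = await fetch("https://example.com/api/v1/products/123", {
  headers: etag ? { "If-None-Match": etag } : {},
});
if (second.status === 304) {
  // 304 Not Modified: keep using `body`; no payload was transferred.
} else {
  // The resource changed; read the fresh representation.
  const fresh = await second.json();
}
```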
uniform interface as common contract between client and server
The contract between the client and the server is not maintained by the server. In other words, the client must be decoupled from the implementation of the service. You can reach this state by using standard solutions: the IRI (URI) standard to identify resources, the HTTP standard to exchange messages, standard MIME types to describe the body serialization format, and metadata (possibly RDF vocabs, microformats, etc.) to describe the semantics of the different parts of the message body. To decouple the IRI structure from the client, you have to send hyperlinks to the clients in hypermedia formats like HTML, JSON-LD, HAL, etc. A client can then use the metadata (possibly link relations, RDF vocabs) assigned to the hyperlinks to navigate the state machine of the application through the proper state transitions in order to achieve its current goal.
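To make this concrete, here is a sketch of a client following a link by its relation name rather than by a hard-coded URI, assuming a HAL-style response (the resource and relation names are hypothetical):

```typescript
// The client only knows the link relation ("orders"), never the URI
// structure behind it, so the server is free to restructure its IRIs.
interface HalLink { href: string }
interface HalResource { _links: Record<string, HalLink> }

const res = await fetch("https://example.com/api", {
  headers: { Accept: "application/hal+json" },
});
const root = (await res.json()) as HalResource;

// Follow the hyperlink the server gave us for the "orders" relation.
const orders = await fetch(root._links["orders"].href, {
  headers: { Accept: "application/hal+json" },
});
```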
For example, when a client wants to send an order to a webshop, it has to check the hyperlinks in the responses sent by the webshop. By checking the links it finds one described with http://schema.org/OrderAction. The client knows the schema.org vocab, so it understands that by activating this hyperlink it will send the order. So it activates the hyperlink and sends a POST https://example.com/api/v1/order message with the proper body. After that the service processes the message and responds with the result, using the proper HTTP status header, for example 201 Created on success. To annotate messages with detailed metadata, the standard solution is to use an RDF format, for example JSON-LD with a REST vocab like Hydra, domain specific vocabs like schema.org or any other linked data vocab, and maybe a custom application specific vocab if needed. Now this is not easy, and that's why most people use HAL and other simple formats which usually provide only a REST vocab, but no linked data support.
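A sketch of that order flow in TypeScript. The response shape loosely follows the schema.org actions pattern (a potentialAction with a target), but the exact fields here are illustrative assumptions, not a fixed wire format:

```typescript
// Hunt for the action annotated as a schema.org OrderAction, then
// activate it. (Illustrative response shape; the target URI and method
// come from the server, not from the client's code.)
interface Action { "@type": string; target: string; method: string }

const res = await fetch("https://example.com/api/v1/cart", {
  headers: { Accept: "application/ld+json" },
});
const doc = await res.json();

const orderAction = (doc.potentialAction as Action[]).find(
  (a) => a["@type"] === "http://schema.org/OrderAction",
);

if (orderAction) {
  const result = await fetch(orderAction.target, {
    method: orderAction.method, // e.g. "POST"
    headers: { "Content-Type": "application/ld+json" },
    body: JSON.stringify({ /* the proper order body */ }),
  });
  console.log(result.status); // e.g. 201 on success
}
```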
build a layered system to increase scalability
The REST system is composed of hierarchical layers. Each layer contains components which use the services of components in the next layer below. So you can add new layers and components effortlessly.
For example, there is a client layer which contains the clients, and below that a service layer which contains a single service. Now you can add a client-side cache between them. After that you can add another service instance and a load balancer, and so on... The client code and the service code won't change.
code on demand to extend client functionality
This constraint is optional. For example you can send a parser for a specific media type to the client, and so on... In order to do this, the client might need a standard plugin loader system, or it will be coupled to the plugin loader solution.
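A sketch of code on demand, assuming the server exposes a hypothetical endpoint that serves a parser as a single JavaScript function expression (evaluating remote code like this is shown only to illustrate the constraint; a real client would sandbox it and load it only over a trusted, encrypted channel):

```typescript
// Download a parser for a media type the client has no built-in support
// for, and load it as a plugin. (Hypothetical endpoints; the served file
// is assumed to contain a single function expression.)
type Parser = (raw: string) => unknown;

const src = await (
  await fetch("https://example.com/api/parsers/vendor-v1.js")
).text();
const parse = new Function(`return (${src});`)() as Parser;

// The client can now understand the custom representation.
const raw = await (await fetch("https://example.com/api/v1/report")).text();
console.log(parse(raw));
```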
The REST constraints result in a highly scalable system in which the clients are decoupled from the implementations of the services. So the clients can be reusable and general, just like the browsers on the web. The clients and the services share the same standards and vocabs, so they can understand each other despite the fact that the client does not know the implementation details of the service. This makes it possible to create automated clients which can find and utilize REST services to achieve their goals. In the long term these clients can communicate with each other and trust each other with tasks, just like humans do. If we add learning patterns to such clients, the result will be one or more AIs using the web of machines instead of a single server park. So in the end the dream of Berners-Lee, the semantic web and artificial intelligence, will become reality. And in 2030 we end up terminated by Skynet. Until then... ;-)