It's not clear what people mean when they refer to a "state machine", as they never explain what the opposite would be. So I'll talk about it both generally and specifically, in terms of the OpenGL "state machine" vs. the current D3D "not a state machine".
Both OpenGL and Direct3D use global state. In order to render in either API, you must set a bunch of global state. You have to set which shaders you're going to use. You have to set the current values for those shaders' uniforms. If you're using textures, you need to set those up. You need to set your current viewport parameters. And so forth.
The reason for this kind of "state machine" is simple: that's how the hardware generally works.
Each bit of state represents some registers in the GPU. Those registers are state. Shaders need to be loaded in order to render. You need to set the viewport registers. You need to set up which texture addressing registers you're using. And so forth. Thus, the APIs are state machines because the GPU is a state machine.
You could imagine an API where all of this was passed directly to each rendering command. But just look at how many objects you'd need to pass: a bunch of shaders, a bunch of textures, your vertex data, your framebuffer, your viewport settings, your blend settings, etc.
So instead, the APIs have you do what the GPU does. You set all this stuff up beforehand.
Plus, this makes the API faster. Why? Because the API now knows what state you're using. If you want to render, say, different parts of one mesh with different textures, you can keep the framebuffer, viewport, vertex data, etc. all the same; the only thing you change between draws is which texture you use.
If you used some kind of giant Draw call with dozens of parameters, the API would have to look at each parameter to see if it's the same as in your last draw call. And for each one that isn't, it would have to update the corresponding GPU registers.
Now, as for the difference between OpenGL and D3D: in this case, the difference in question is how they treat objects.
D3D is object-based, in that functions that modify objects take the object as a parameter. Also, most D3D objects are immutable; once you create them, you can't change most of their settings. Once you create a texture of a certain size, format, etc, it's done. You can't reallocate it with a different size/format/etc without deleting the object and creating a new one.
OpenGL is state-based. What this means is that OpenGL functions that modify objects (for the most part) do not take the object they operate on as a parameter.
This is not a "design" so much as simply OpenGL's strict adherence to backwards compatibility. Objects in OpenGL are just fragments of global state; that's how they're defined. Why?
Because originally, back in OpenGL 1.0, there were no objects (besides display lists). Yes, not even texture objects. When they decided that this was stupid and that they needed objects, they chose to implement them in a backwards-compatible way. Everyone was already using functions that operated on global state. So they simply said that binding an object overrides the global state: the functions that used to change global state now change the bound object's state.
In this way, they could introduce objects into the API without also introducing a bunch of new functions that only work with objects. Thus, code that worked before could work with objects with only very minor tweaking, rather than forcing a non-trivial rewrite of the code. It also means that if they needed to introduce new functions that poke at textures, they would work with and without objects. Thus, it was both backwards and forwards compatible.
Most OpenGL objects work this way: if you want to change them, you have to bind them, then modify the "global" state.