I understand how a computer works on the basic principles, such as: a program can be written in a "high" level language like C# or C, and then it's broken down into object code.
Intel and the x86 are big on backward compatibility, which certainly helped them, but at the same time hurts them greatly. The internals, from the 8088/8086 to the 286, to the 386, to the 486, Pentium, Pentium Pro, etc., up to the present, are somewhat of a redesign each time. Early on it was about adding protection mechanisms so operating systems could protect apps from each other and protect the kernel; then it turned into performance: adding execution units, superscalar execution and all that comes with it, multi-core processors, etc. What used to be a real, single AX register in the original processor turns into who knows how many different things in a modern processor. Originally your program was executed in the order written; today it is diced and sliced and executed in parallel in such a way that the intent of the instructions as presented is honored, but the execution can be out of order and in parallel. Lots and lots of new tricks buried behind what on the surface appears to be a very old instruction set.
The instruction set changed from its 8/16-bit roots to 32-bit, then to 64-bit, so the assembly language had to change as well: adding EAX alongside AX, AH, and AL, for example, and later RAX on top of that. Occasionally other instructions were added. But the original load, store, add, subtract, and, or, etc. instructions are all there. I have not done x86 in a long time and was shocked to see that the syntax has changed and/or a particular assembler messed up the x86 syntax (for example, the AT&T-style syntax used by gas differs considerably from the Intel-style syntax in the original manuals). There are a zillion tools out there, so if one doesn't match the book or web page you are using, there is one out there that will.
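A quick way to see that register layering for yourself (a throwaway example of my own, nothing canonical about it): compile something like this with gcc -S on a 64-bit machine, and the same logical register shows up as AL, AX, EAX, or RAX depending on the operand size the compiler needs.

    #include <stdint.h>

    /* The widths nest: RAX (64 bits) contains EAX (32), which contains AX (16),
       whose halves are AH and AL (8).  Integer return values come back in that
       register at the matching width, so each function below will typically use
       a different name for the same physical register.  Exact output varies by
       compiler and options. */
    uint8_t  add8 (uint8_t a, uint8_t b)   { return (uint8_t)(a + b); }   /* likely AL  */
    uint16_t add16(uint16_t a, uint16_t b) { return (uint16_t)(a + b); }  /* likely AX  */
    uint32_t add32(uint32_t a, uint32_t b) { return a + b; }              /* likely EAX */
    uint64_t add64(uint64_t a, uint64_t b) { return a + b; }              /* likely RAX */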
So thinking in terms of assembly language for this family is both right and wrong. The assembly language may have changed syntax and is not necessarily backward compatible, but the instruction set or machine language or whatever similar term you prefer (the opcodes/bits the assembly represents) is such that much of the original instruction set is still supported on modern x86 processors. 286-specific nuances may not work, perhaps, as with other new features of specific generations, but the core instructions, load, store, add, subtract, push, pop, etc., all still work and will continue to work. I feel it is better to "drive down the center of the lane": don't get into chip- or tool-specific gee-whiz features; use the basic, boring, has-worked-since-the-beginning-of-time subset of the language.
Because each generation in the family is trying for certain features, usually performance, the way the individual instructions are handed out to the various execution units changes... on each generation... so hand-tuning assembler for performance, trying to out-do a compiler, can be difficult at best. You need detailed knowledge about the specific processor you are tuning for. From the early x86 days to the present, unfortunately, what made the code execute faster on one chip would often cause the next generation to run extra slow. Perhaps that was a marketing tool in disguise, not sure: "Buy the hot new processor that costs twice as much as the one you have now, advertises twice the clock speed, but runs your same copy of Windows 30% slower. In a few years, when the next version of Windows is compiled (and this chip is obsolete), it will then double in performance." Another side effect of this is that, at this point in time, you cannot take one C program and create one binary that runs fast on all x86 processors; for performance you need to tune for the specific processor, meaning you need to at least tell the compiler to optimize and what family to optimize for. And for something like Windows or Office, or anything else you are distributing as a binary, you likely cannot, or do not want to, somehow bury several differently tuned copies of the same program in one package or in one binary... drive down the center of the road.
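To make the "tell the compiler what family to optimize for" part concrete (these are standard gcc flags; which one is right depends entirely on where the binary has to run, and other compilers have their own equivalents):

    /* prog.c -- any small test program will do for trying the flags below */
    int main(void) { return 0; }

    /* Example build lines; the flags are real gcc flags, the choices are not
       recommendations:
         gcc -O2                prog.c    generic: runs everywhere, tuned for no chip
         gcc -O2 -mtune=native  prog.c    schedule/tune for the build machine, but
                                          still runs on other x86-64 parts
         gcc -O2 -march=native  prog.c    also use the build machine's instruction-set
                                          extensions; may not run on older parts       */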
As a result of all the hardware improvements, it may be in your best interest not to try to tune the compiler output or hand assembler to any one chip in particular. On average the hardware improvements will compensate for the lack of compiler tuning, and your same program hopefully just runs a little faster each generation. One of the chip vendors used to aim at making today's popular compiled binaries run faster tomorrow; the other vendor improved the internals such that if you recompiled today's source for the new internals you could run faster tomorrow. Those activities have not necessarily continued: each generation runs today's binaries slower, but tomorrow's recompiled source the same speed or slower. It will run tomorrow's re-written programs faster, sometimes with the same compiler, sometimes you need tomorrow's compiler. Isn't this fun!
So how do we know a particular compiled or hand-assembled program is as fast as it possibly can be? We don't; in fact, for x86 you can guarantee it isn't: run it on one chip in the family and it is slow, run it on another and it may be blazing fast. x86 or not, other than for very short programs or very deterministic programs like you would find on a microcontroller, you cannot definitively say "this is the fastest possible solution." Caches, for example, are very hard, if even possible, to tune for, and the same goes for the memory behind them; particularly on a PC, where the user can choose various sizes, speeds, ranks, banks, etc. and adjust BIOS settings to change even more, you really cannot tell a compiler to tune for that. So even on the same computer, same processor, same compiled binary, you have the ability to turn some of the knobs and make that program run a lot faster or a lot slower. Change processor families, change chipsets, motherboards, etc., and there is no possible way to tune for so many variables. The nature of the x86 PC business has become too chaotic.
Other chip families are not nearly as problematic. Some are, perhaps, but not all. So these are not general statements; they are specific to the x86 chip family. The x86 family is the exception, not the rule, and probably the last assembler/instruction set you would want to bother learning.
There are tons of websites and books on the subject; I cannot say one is better than another. I learned from the original set of 8088/86 books from Intel and then the 386 and 486 books, and didn't look for Intel books after that (or any other books). You will want an instruction set reference, and an assembler like nasm or gas (the gnu assembler, part of binutils, which comes with most gcc-based compiler toolchains).

As far as the C to/from assembler interface, you can, if nothing else, figure that out by experimenting: write a small C program with a few small C functions, disassemble it or compile it to assembler, and look at what registers and/or how the stack is used to pass parameters between functions. Keep your functions simple and use only a few parameters and your assembler will likely work just fine; if not, look at the assembler of the function calling your code and figure out where your parameters are. It is all well documented somewhere, and these days probably much better than in the old days. In the early 8088/86 days you had tiny, small, medium, large, and huge compiler memory models, and the calling conventions could vary from one to the other, as well as from one compiler to the next: Watcom (formerly Zortech, and perhaps other names) passed by register, Borland and Microsoft passed on the stack and were pretty close if not the same. Now, with 32- and 64-bit flat memory space and standard calling conventions, you can use one model and not have to memorize all the nuances (just one set of nuances).

Inline assembly is an option, but it varies from C compiler to C compiler, and getting it to work properly and effectively is more difficult than just writing assembler in its own file. gcc, and perhaps other compilers, will allow you to put the assembler file on the C compiler command line as if it were just another C file; it will figure out what you have given it and pass it to the assembler for you. That is, if you don't want to call the assembler program yourself and put the object on the C compiler command line.
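A minimal sketch of the "write a few small functions and look at how parameters are passed" experiment (my own example; it assumes a 64-bit x86 toolchain, and the register list in the comment is the System V convention used on Linux and the BSDs, so check your own platform's documents):

    /* args.c -- trivial functions, just to expose the calling convention.
       On 64-bit Linux/BSD (System V ABI) the first integer arguments arrive in
       rdi, rsi, rdx, rcx, r8, r9 and the return value goes back in rax; 64-bit
       Windows uses rcx, rdx, r8, r9 instead.  Don't take my word for it --
       compile it and read the output. */

    int add2(int a, int b)
    {
        return a + b;       /* gcc -O2 typically emits something like:
                               lea eax, [rdi+rsi] ; ret   (Intel syntax) */
    }

    int add5(int a, int b, int c, int d, int e)
    {
        return a + b + c + d + e;
    }

    int call_them(void)
    {
        /* watch how the caller loads the argument registers before each call */
        return add2(1, 2) + add5(3, 4, 5, 6, 7);
    }

    /* build and inspect, for example:
         gcc -O2 -S args.c                          (args.s is the assembly)
         gcc -O2 -c args.c && objdump -d args.o     (disassemble the object)  */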
If nothing else, disassemble a lot of simple functions: add a few parameters and return them, etc. Change compiler optimization settings and see how that changes the instructions used, often dramatically. Even if you cannot write assembler from scratch, being able to read it is very valuable, both from a debugging and a performance perspective.
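A minimal sketch of the optimization-settings experiment (the file and function names are just mine): the same function compiled at -O0 and at -O2 usually looks like two different programs.

    /* sum.c -- compile it twice and compare the output:
         gcc -O0 -S -o sum_O0.s sum.c
         gcc -O2 -S -o sum_O2.s sum.c
       At -O0 expect every variable to live on the stack, with loads and stores
       wrapped around each use; at -O2 expect everything kept in registers, and
       the loop may be unrolled, vectorized, or reshaped entirely, depending on
       the compiler version and target. */

    int sum(const int *v, int n)
    {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += v[i];
        return total;
    }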
Not all compilers for all processors are good. gcc, for example, is one-size-fits-all, and just like a one-size sock or ball cap, it doesn't really fit anyone well: it does pretty well for most of its targets, but not really great for any of them. So it is quite possible to do better than the compiler with hand-tuned assembler, but on average, across lots of code, you are not going to win. That applies to most processors, the more deterministic ones, not just the x86 family. It is not about fewer instructions; fewer instructions does not necessarily equate to faster. To outperform even an average compiler in the long run you have to understand the caches, fetch, decode, the execution state machines, the memory interfaces, the memories themselves, etc. With compiler optimizations turned off it is very easy to produce faster code than the compiler, so you should just use the optimizer, but also understand that this increases the risk of the compiler making a mistake. You need to know the tool very well, which goes back to disassembling often to understand how your C code and the compiler you use today interact with each other. No compiler is completely standards compliant, because the standards themselves are fuzzy, leaving some features of the language up to the discretion of the compiler (drive down the middle of the road and don't use those parts of the language).
Bottom line, from the nature of your questions: I would recommend writing a bunch of small functions, or programs with some small functions, compiling to assembler, or compiling to an object and disassembling it, to see what the compiler does. Be sure to use different optimization settings on each program. Gain a working reading knowledge of the instruction set (granted, the asm output of the compiler or disassembler has a lot of extra fluff that gets in the way of readability; you have to look past that, and you need almost none of it if you want to write assembler). Give yourself 5-20 years of studying and experimenting before you can expect to outperform the compiler on a regular basis, if that is your goal. By then you will have learned that, particularly with this chip family, it is a futile effort; you win a few but mostly lose...

It would be to your benefit to compile (to assembler) the same code for other chip families like ARM and MIPS, and get a general feel for what C code compiles well in general and what C code doesn't compile well, and then make your C programming better instead of trying to make the assembler better. Also try other compilers like llvm. gcc has a lot of quirks that many think are part of the C language standard but are instead nuances of, or problems with, that specific compiler. Being able to read and analyze the assembly output of the compilers and their options will provide this knowledge. So I recommend you work on a reading knowledge of the instruction set, without necessarily having to learn to write it from scratch.
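If you want to try the other-instruction-set comparison, here is a rough sketch (toolchain names vary by distro; the ones below are the common Debian/Ubuntu cross-compiler names, so substitute whatever you actually have installed):

    /* cross.c -- the same read-the-output experiment on other instruction sets.
         arm-linux-gnueabihf-gcc -O2 -S cross.c              ARM
         mips-linux-gnu-gcc      -O2 -S cross.c              MIPS
         clang --target=aarch64-linux-gnu -O2 -S cross.c     llvm/clang, which can
                                                             emit assembly for other
                                                             targets out of the box
       Comparing the outputs for the same C source is a quick way to see which
       C constructs translate cleanly everywhere and which ones do not. */

    unsigned divide_by_10(unsigned x)
    {
        return x / 10;   /* division by a constant: watch how each target turns
                            this into multiplies, shifts, or an actual divide */
    }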