There's no way to change the C standard library, since the standard is controlled by committees (ANSI and ISO), and they are very resistant to change in general. This is actually pretty reasonable, because if they added a bunch of things to the standard library, then every compiler for all of the different hardware I was mentioning would have to implement them as well. For many C applications this would be overkill; there are probably few microcontroller applications that need to run an HTTP server or whatever else got added to the standard.

If you write in C you will definitely have to use some external library at some point. Sometimes this is not too bad, because the library might be ubiquitous. For instance, writing a video game using DirectX isn't too bad because all Windows users have DirectX. Also, most operating systems have their own standard extensions to the C library: Unix operating systems have the POSIX headers, and Windows has the Windows headers. But of course, when using any of these libraries you have to remember that your code will never work on another type of operating system without the aid of an emulator. There are cross-platform libraries too, but you have to specifically look for them, and using them doesn't automatically make your code cross-platform.

The biggest deal about using these libraries, however, is that unless the library is wildly popular, you are probably going to have to include it with your distribution, and with that, some sort of installer for the user. A problem that's a lot more serious: when you ship your code with other people's code, you have to agree to release it under their terms. Some of these libraries, like Qt, require you to release your own source code under the same open-source license (which makes it hard to sell your program as a closed product) or pay a large sum for the commercial version.
The more external libraries you use, the higher the chance that those libraries will conflict, or at the very least, the more license terms your code is subject to.
The question about compilers coming with languages is one I used to get confused about as well. A language is just a specification: a specification for a text file, and a specification for how that text file is to be interpreted. Essentially a language is just a set of rules that say "you must start with the word int, followed by main, followed by a left parenthesis, etc." So you can open Notepad right now, and if you follow the rules given by the ANSI/ISO people, then you have written valid C. You don't need a compiler or anything. Anyone is free to write a compiler, and because of that there are many. Once you have Notepad and any one of these many compilers, you can write C code and then compile it into binary.
Python does not have a compiler. You may be asking two questions: 1) "If computers can only understand ones and zeroes, then how can one possibly run code that isn't turned into ones and zeroes?" and 2) "Why not have a compiler, since they are so awesome?" To answer the first question, imagine a video game where you type in commands, and if a command is in a certain format, the game does what you tell it to. For instance, you might type "print 'hello world'" and it prints the words "hello world" onto your screen. This is what's called an interpreter. It's a program that reads in commands and does what it is told. Because the computer now has to run the interpreter code AND the code that the interpreter is running, interpreted languages run slower than compiled languages. If all the language had to do was run print commands, then interpreted languages would be negligibly slower than compiled ones. However, interpreted languages don't exist just to be slower versions of compiled languages; they exist to exploit the interpreter. For instance, in C you typically have a 32-bit integer. This means the largest number you can hold (unsigned) is 2^32 - 1, about 4.3 billion. So if you wanted to calculate 13 factorial (about 6.2 billion), your C code would return the wrong answer, because it simply can't hold a number that big. If you wanted to be really clever you could create a complicated data type that represents an arbitrary-precision integer by allocating memory on the fly, but it would be difficult to implement a fast, efficient version, and the syntax would not be as easy as x = factorial(13). What the interpreted languages say is: "Let the interpreter do all of this complicated work; the programmer should be able to program without all of the complications that have to do with the machine." So in Python you can totally say x = factorial(1000) and it's all good (as long as you have enough RAM!). In Python you can also allocate pretty much as much memory as you want and never have to worry about returning it.
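You can see this directly in Python's integers, which grow as needed. A quick demonstration using the standard library's math.factorial:

```python
import math

# 2**32 - 1 is the largest value a 32-bit unsigned C integer can hold.
UINT32_MAX = 2**32 - 1

# 13! is the first factorial too big for 32 bits; Python doesn't care.
print(math.factorial(13))               # 6227020800
print(math.factorial(13) > UINT32_MAX)  # True

# Even factorial(1000), a number with over 2500 digits, just works.
x = math.factorial(1000)
print(len(str(x)))                      # 2568
```

No special data type, no manual memory management: the interpreter quietly switches to arbitrary-precision arithmetic behind the scenes.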
The interpreter runs a component called a garbage collector that does all of that hard work for you. This garbage collector is probably the slowest part of the interpreter's job. But interpreted languages aren't just about making doable things easier; they are also used for writing code that is pretty much impossible to write in a compiled language. I'll mention these when I write up the advantages/disadvantages of Python.
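A small sketch of what that buys you: in CPython, an object disappears on its own once nothing refers to it anymore. The standard-library weakref module lets us watch that happen without keeping the object alive ourselves:

```python
import gc
import weakref

class Blob:
    pass

b = Blob()
ref = weakref.ref(b)  # a weak reference doesn't keep the object alive
print(ref() is b)     # True -- the object is still around

del b                 # drop the only real reference
gc.collect()          # normally automatic; forced here for demonstration
print(ref())          # None -- the object has been reclaimed
```

Notice there's no free() anywhere; the moment the last reference goes away, the memory is handed back for you.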
So probably the biggest question is, "So really, how slow is Python?!" The answer is that it can be up to 1000 times slower than C. So why does this not worry me and the interpreted-language fans around the world? Partially because computers are so fast anyway. A 1.5 GHz computer can add two numbers in C in about 0.7 nanoseconds. That means in Python it takes about 0.7 microseconds, or 700 nanoseconds (this isn't literally true, but let's say every one-clock operation in Python took 1000 clocks). Going from one ridiculously small number to another ridiculously small number doesn't upset me too much. The other reason has to do with the kinds of programs we write. I implore you to look at what's going on on your desktop right now. There's probably a web browser up that's doing nothing but waiting for you to move the scroll bar, which is a minuscule operation, and you have your music player, whose processing is mostly handled by your sound card. If you look at how your computer is performing, you might see that Firefox is taking a significant part of your memory, but that your CPU usage is well under 30%. This is a big secret about computers: almost all the time they are running, they are really doing nothing. Now, what I don't want to suggest is that this is reason enough to add a huge load to your computer because of Python. What I do want to suggest is that if you write your programs correctly, there will be very little Python code running in the first place. These are the kinds of applications that Python is good for. There are other applications that require a computer to be running at full juice; that's where you want to have another language in your back pocket.
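If you're curious about the numbers on your own machine, the standard-library timeit module will measure them for you. The exact figure varies by hardware, so take the scale, not the digits:

```python
import timeit

# Run a single Python-level addition a million times and divide.
# On typical hardware this lands in the tens of nanoseconds per
# addition -- slower than C's fraction of a nanosecond, but still
# a ridiculously small number.
total = timeit.timeit("x + y", setup="x, y = 3, 4", number=1_000_000)
print(f"~{total * 1000:.0f} ns per addition")
```

Even at interpreter speed, a million additions finish in a blink, which is the point: for most desktop programs the CPU is idle almost all of the time anyway.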
To further complicate your understanding, I will bring you into the current and future state of programming. None of the major Python interpreters read text and run it directly. Instead they compile the code to instructions for an imaginary machine. This is called a virtual machine, or VM. In Python, this "bytecode" is what actually gets interpreted. This is also what Java, C#, and all of the .NET languages do (including C++/CLI, the .NET dialect of C++), except they go one step further. Instead of interpreting the bytecode directly, they actually compile some parts of it to machine code as it's running! So the code becomes about as fast as compiled code. This is called just-in-time (JIT) compilation. Jython and IronPython get this automatically from the JVM and .NET runtimes they sit on. The de facto standard Python implementation, CPython, also has an extension module called Psyco that does this after you add a whopping two lines of code to your file. As mentioned before, some parts of Python can't really be compiled effectively, so the gains aren't as great as in the other JIT languages, but it can easily make your code run 10 times faster.
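You can actually look at this bytecode yourself with the standard-library dis module. The exact opcodes vary between Python versions, but the idea is the same: your text has already been compiled into instructions for the VM.

```python
import dis

def add(a, b):
    return a + b

# Disassemble the function to see the bytecode CPython interprets.
# Expect instructions along the lines of LOAD_FAST (push the
# arguments), a binary-add opcode, and RETURN_VALUE.
dis.dis(add)
```

Run this and you'll see that even a one-line function is a short program for the virtual machine, not raw text being re-read every call.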