The number of inputs available in a GPU varies massively. Modern PC graphics cards can do all sorts of fun things, but the graphics cards in mobile phones are a lot more primitive.
BindChannels seems to be Unity's way of coping with this issue.
When you think about what is actually happening in a pixel shader, you take a block of "stuff" in and output a colour.
How the "stuff" is organised needs to be defined in the shader code, but the shader code is a completely separate entity than the code running on the CPU.
So how do you get the "stuff" organised so that the CPU can supply the GPU with something that matches what the GPU expects? After all, if you pass in a colour and a texture coordinate in the wrong order, the pixel shader doesn't know you have screwed up; it just does what it has been told with the data you have supplied.
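As an illustration of what "defined in the shader code" means, here is the common Cg/HLSL convention (the struct and field names are my own example, not anything from your shader): the shader declares a struct, and the semantics after each field name the input channel it expects the data to arrive on.

```
// Sketch of a Cg/HLSL vertex input declaration.
// The semantics (POSITION, TEXCOORD0, COLOR) tell the GPU
// which channel each field should be fed from.
struct appdata
{
    float4 vertex   : POSITION;  // vertex position
    float2 texcoord : TEXCOORD0; // first UV set
    float4 color    : COLOR;     // per-vertex colour
};
```

Something on the CPU side has to arrange the vertex data so each field lines up with the right channel, and that is the job BindChannels does in Unity.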
There are lots of ways of doing this; it is not standardised. XNA has one way of doing it, OpenGL another, OpenGL ES another, and so on.
Looks like Unity has another.
The first structure is saying "give the shader a vertex position and two sets of texture coordinates".
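Assuming your first block follows the standard BindChannels syntax from Unity's documentation, it would look something like this sketch:

```
BindChannels {
    Bind "Vertex", vertex        // vertex position
    Bind "texcoord", texcoord0   // first set of texture coordinates
    Bind "texcoord1", texcoord1  // second set of texture coordinates
}
```

Each `Bind` line maps a source channel in the mesh data (the quoted name) to the target channel the shader reads from.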
What that actually gets compiled to should be ignored while you are learning. There are many different possibilities depending on the shader compiler.
The second is saying "give the shader a vertex position, one texture coordinate, and one colour"
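Again as a hedged sketch in the same standard syntax, that second block would be along the lines of:

```
BindChannels {
    Bind "Vertex", vertex      // vertex position
    Bind "texcoord", texcoord  // single set of texture coordinates
    Bind "Color", color        // per-vertex colour
}
```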
That's all it really means.