I define knowledge as: A) Justified, B) True, C) You believe it.
C is fairly redundant- of course you believe anything you know. If you "knew" it but didn't believe it, you wouldn't call it knowledge. A is absolutely true, assuming B is true. But B is the question. We are discussing, here, whether truth is knowable. If the ability to "know" anything is in question, then truth itself is in question. So, if we are discussing whether Truth is knowable, then defining knowledge as 'true' is circular, at best.
I prefer this more straightforward (to me, anyway) definition: the mental recollection and understanding of a fact.
Justification is the lynchpin. Why did you believe something to be true? And if you raise that bar high enough, we really don't know very much.
Certainly you are right. If you are talking about 100% certitude, there are a great number of things we do not "know." However, we can "know" things with a great degree of certainty. For instance, take your example. While I would not necessarily know, as soon as I left the store, that my car had gone for a ride, there would likely be clues to tell me: the odometer changing, for instance, or an unlocked door. So, if I get out to my car and it is in the same spot where I left it, the odometer hasn't changed, and the doors are still locked, I can say with close to 100% certitude that my car had, indeed, been sitting and waiting for me in the parking lot the entire time.
So whether or not my car has moved is still reasonably knowable; I just have to interpret the available evidence.
How do we know that we didn't learn our language in relation to how others describe things? I heard someone say the Eiffel Tower is brown, but the color I see is what he would call green. We can never share our consciousness, so we would never know that the words we're using mean different things.
The first is true. If the second were, however, we could never communicate. We are not talking, here, about the language we use. Obviously there are a very large number of human languages, and each of them will describe the same concept differently. The question, then, is whether two people speaking the same language mean the same concept by the same words. If they do, they can communicate; if they don't, they can't. If I say, "That table is sturdy," but you understand "That brisket I ate was wonderful!" we haven't communicated. Even if we thought we were communicating, we would not be.
As a (somewhat) practical (if contrived) example, consider the following. You're having a discussion with your boss, and he gives you several "To Dos." He says, "Email the maintenance staff about your broken chair." You hear, "Go make me a sammich." Finding the order odd, but not wanting to get fired, you say, "Yes sir." He hears "I'll send that email immediately." Do you think he'll be rather surprised when you drop a half-congealed Monte Cristo on his desk?
That's what meaning different concepts by the same words would actually look like.
If we can communicate concepts like "Email maintenance," then we can be reasonably sure we are correctly communicating concepts like "The sky is blue."
The trick is in not taking our 'knowledge' so far that we think all our observations are accurate knowledge.
This is the one I have the most problem with, however. Of course our observations are accurate knowledge. What they might not be is comprehensive knowledge- and that's something different. If I observe a bird flying, then it is accurate knowledge that that bird, at that time, was flying. What I cannot do, from that one observation, is make the comprehensive statement that all birds fly. I would only know it about that one bird.
The trick here is to use one of the First Principles of logic (and here my lack of formal training may get the best of me, so please bear with me): the Principle of Uniformity. It basically says that when we see something repeated in nature, we can be pretty sure that it's going to be a regular occurrence. Now, that's not completely true- which is why the Principle of Uniformity is only one of the first principles (and not really the most important); the others are combined with it to get a fuller picture of knowledge. However, after observing, say, 100 times that you can pick up a rock (any rock), let go of it, and watch it fall to the ground (or whatever surface is beneath it), you can, based on the Principle of Uniformity, infer that rocks- at least- will not stay in the air on their own. When you've done the same thing with enough objects, you can then generalize that this is true of all objects.
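As a rough, illustrative way to put numbers on this (my own gloss, not part of the argument above), Laplace's rule of succession estimates how much confidence repeated observation entitles us to: after n trials that all came out the same way, the estimated probability that the next trial agrees is (n+1)/(n+2).

```python
# Illustrative sketch of inductive confidence via Laplace's rule of
# succession. After `trials` observations, of which `successes` agreed
# with the pattern, the estimated probability that the next observation
# also agrees is (successes + 1) / (trials + 2).

def rule_of_succession(successes: int, trials: int) -> float:
    """Estimated probability that the next trial is a success."""
    return (successes + 1) / (trials + 2)

# After 100 dropped rocks, all of which fell:
confidence = rule_of_succession(100, 100)
print(f"{confidence:.3f}")  # prints 0.990
```

Note that the estimate approaches 1 as observations accumulate but never reaches it- which is exactly the point: repetition licenses strong confidence, not perfect certitude.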
When you find an object (say, a helium balloon) that does not fit your knowledge, that does not indicate that your knowledge is wrong- simply that it is incomplete. And part of logic and philosophy is taking these facts that seem contrary to the general rules (what we might call 'edge cases') and finding out why they don't behave as expected. With our example, that might lead to the concepts of relative density, gravity, and so forth.
To link this, finally, to my main point: if your point is that we cannot know Truth 100%, you are largely right. At some point, we have to admit we cannot have perfect knowledge, and accept that there is some amount of faith required for any belief we hold. The question, then, becomes how much evidence we have for a given proposition. If we have much evidence for something, it doesn't take much faith to believe it. If we don't have very much at all- or we have contradicting evidence- then we must have a lot of faith to believe it.