Critical Thinking and Logic: A Primer for Beginners

NOTE ON PROGRESS:
This is a living project. Sections are being added and expanded over time. Right now, the focus begins with Logical Fallacies. Other sections will be filled in gradually, so check back for updates.

Introduction

This page is a quick reference guide for laypeople who have never had formal training in logic but want to learn the basic principles of reasoning and debate.

It is not intended to be authoritative or exhaustive. Instead, the goal is to make logic and the rules of critical thinking more accessible, practical, and clear for anyone who’s interested.

Whether you’re trying to spot sloppy reasoning in an online debate, sharpen your own arguments, or just understand the difference between a solid point and a rhetorical trick, this page is here as a starting point and ongoing resource.

Table of Contents


I. Core Concepts of Argumentation (coming soon)


II. Argument Structures (coming soon)


III. Logical Fallacies

Logical fallacies are errors in reasoning that undermine the strength of an argument. Some are structural (formal fallacies), while others misuse language, assumptions, or relevance (informal fallacies).

Each entry includes:

  • Definition
  • Examples


A. Formal Fallacies

Affirming the Consequent

Definition: Treating the truth of the result as proof of its supposed cause.

Form: If P → Q. Q is true, therefore P must be true.

Key terms:

  • Antecedent: the first part of an if–then statement, the condition. Example: If it rains…
  • Consequent: the second part, the result that follows if the condition is true. Example: …then the streets will be wet.

Non-fallacious example:

If it rains (antecedent), then the streets will be wet (consequent).
It rains (antecedent affirmed).
Therefore, the streets are wet (consequent affirmed).

This is valid reasoning. When the antecedent is affirmed, the consequent follows as expected. The structure is:

If A, then B.
A.
Therefore, B.

The fallacy (affirming the consequent):

If it rains (A), then the streets will be wet (B).
The streets are wet (B).
Therefore, it rained (A).

The mistake is reversing the logic. While rain causes wet streets, wet streets can come from other causes (sprinklers, street cleaning, a water main break). The consequent (B) does not guarantee the antecedent (A). Symbolically, this is:

If A → B.
B.
Therefore, A. (fallacious)
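
For readers who want to check this mechanically, here is a minimal Python sketch (not part of the original text) that runs through every combination of truth values. The valid form, modus ponens, never has true premises and a false conclusion; affirming the consequent does.

    from itertools import product

    def implies(p, q):
        # Material conditional: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    for a, b in product([True, False], repeat=2):
        # Valid form: premises "A -> B" and "A", conclusion "B".
        modus_ponens_counterexample = implies(a, b) and a and (not b)
        # Fallacious form: premises "A -> B" and "B", conclusion "A".
        affirming_consequent_counterexample = implies(a, b) and b and (not a)
        print(a, b, modus_ponens_counterexample, affirming_consequent_counterexample)

    # The modus ponens column is False in every row (no counterexample exists),
    # while the fallacious form fails at A=False, B=True: wet streets without rain.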

Denying the Antecedent

Definition: Assuming that if the first condition is false, the result must also be false.

Form: If P → Q. Not P, therefore not Q.

Key terms:

  • Antecedent: the “if” part of the conditional.
  • Consequent: the “then” part of the conditional.

Non-fallacious example:

If I study hard (antecedent), then I will pass the test (consequent).
I study hard (antecedent affirmed).
Therefore, I will pass the test (consequent affirmed).

This is valid reasoning. Affirming the antecedent allows us to affirm the consequent. The structure is:

If A, then B.
A.
Therefore, B.

The fallacy (denying the antecedent):

If I study hard (A), then I will pass the test (B).
I did not study hard (not-A).
Therefore, I will not pass the test (not-B).

This is invalid because there may be other ways to pass (prior knowledge, luck, an easy exam). The absence of the antecedent does not guarantee the absence of the consequent. Symbolically:

If A → B.
Not A.
Therefore, not B. (fallacious)

Another example:

If I turn on the heater (A), then the room will warm up (B).
I did not turn on the heater (not-A).
Therefore, the room will not warm up (not-B).
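
The same truth-table exercise (a small illustrative sketch, not part of the original text) exposes the flaw: there is a case where "A -> B" holds and A is false, yet B is still true.

    def implies(p, q):
        # Material conditional: false only when p is true and q is false.
        return (not p) or q

    # Search for a counterexample to "If A -> B and not A, then not B".
    for a in (True, False):
        for b in (True, False):
            if implies(a, b) and (not a) and b:
                print(f"Counterexample: A={a}, B={b}")

    # Prints "Counterexample: A=False, B=True": the heater stayed off,
    # yet the room warmed up anyway (sunlight, another heat source, etc.).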

Undistributed Middle

Definition: Assuming that because two things share a property, they are the same.

Form: All A are C. All B are C. Therefore, all A are B.

Key terms:

In categorical logic, the middle term is the category that appears in both premises but not in the conclusion. To “distribute” a term means to apply it to all members of a group. If the middle term is undistributed, the conclusion cannot follow.

Non-fallacious example:

All poodles are dogs (middle term = dogs).
All dogs are animals.
Therefore, all poodles are animals.

Here the middle term “dogs” is distributed: it applies to the whole class of dogs, and this links poodles properly to animals. The reasoning is valid.

The fallacy (undistributed middle):

All cats are mammals (middle term = mammals).
All dogs are mammals (middle term = mammals).
Therefore, all cats are dogs.

This is invalid because the middle term “mammals” is not distributed. Cats and dogs both belong to the larger class of mammals, but that does not mean they are identical to each other.

Another example:

All roses are flowers.
All tulips are flowers.
Therefore, all roses are tulips.

Again, the middle term “flowers” is undistributed. Belonging to the same category does not make two subgroups identical.
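
The same point can be made with sets. The following Python sketch (illustrative only, with made-up example members) shows that two subgroups of the same superset need not overlap at all.

    cats    = {"siamese", "tabby"}
    dogs    = {"poodle", "beagle"}
    mammals = cats | dogs | {"dolphin"}

    print(cats <= mammals)   # True: all cats are mammals
    print(dogs <= mammals)   # True: all dogs are mammals
    print(cats <= dogs)      # False: sharing the middle term "mammals" never links them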

B. Informal Fallacies

1. Fallacies of Relevance

Ad Hominem

Also known as: personal attack fallacy, argument against the person.

In everyday use, “ad hominem” is often taken to mean simply insulting someone. But in logic, it is more precise: it’s when an attack on the person is substituted for an argument against their claim. Saying “Your argument is invalid because you’re an idiot” is an ad hominem fallacy, because the insult is standing in for reasoning. By contrast, there is nothing logically fallacious about criticizing someone and also engaging with their argument. For example, “You’re an idiot, and your claim is wrong because the following evidence contradicts you...” is not a fallacy: the critique of behavior and the refutation are separate.

More to the point:

  • Fallacy: “Your argument is invalid because you’re an idiot.”
  • Not a fallacy: “Your argument is invalid, and you are an idiot.”

It’s also worth noting that not all references to a person are ad hominem. For instance, questioning someone’s credibility can be relevant in specific contexts (“We should doubt this claim because the source has lied repeatedly on this topic”). What makes something an ad hominem fallacy is when the attack replaces reasoning, not when a personal detail is contextually relevant.

Appeal to Authority

Also known as: argument from authority, argumentum ad verecundiam.

Appealing to authority isn’t always fallacious—after all, experts often know what they’re talking about. But it becomes a fallacy when the authority cited is:

  • Not a real expert in the relevant field
  • Quoted without context or without consensus
  • Used to shut down further argument or questioning

For example: “Einstein believed in God, so God must exist.” Even if that claim about Einstein were accurate (it’s complicated), he was a physicist—not a theologian or philosopher—and he’s not infallible. His views on religion have no binding authority on anyone else.

Another version of this fallacy is: “You’re not an expert, so your argument is invalid.” This is an inverted version of appealing to authority—dismissing a claim not on its merits, but on the credentials of the person making it.

Appeal to Consequences

Definition: Arguing that a belief must be true or false based on whether the outcome of believing it is desirable or undesirable.

Examples:

  • “Atheism can’t be true because it would mean life has no meaning.”
  • “God must exist, because otherwise there would be no moral compass.”

This fallacy confuses the emotional impact of a belief with its truth value. Whether an idea is comforting or horrifying doesn’t determine its accuracy. Reality isn’t obligated to care how we feel about it.

Appeal to Emotion

Also known as: argument from emotion, playing on emotions.

This fallacy tries to win an argument not with logic, but by tugging at the listener’s feelings. Guilt, fear, pity, anger, or flattery can all be used to bypass rational analysis. This doesn’t mean emotion is irrelevant—only that it cannot substitute for actual evidence.

Fallacy: “If you don’t believe in God, your grandmother will be heartbroken. Do you want to hurt her like that?”

Not a fallacy: “We should act because the evidence shows this causes real human suffering.”

Tu Quoque (You Too)

Definition: Dismissing criticism by pointing out the critic’s flaws.

Examples:

  • “You say I shouldn’t smoke, but you used to smoke.”
  • “Why should I listen to your advice on saving money—you’re in debt?”
  • “You criticize my grammar, but you misspelled something last week.”

Also known as: appeal to hypocrisy, “whataboutism.”

This fallacy is often misunderstood. In casual speech, any charge of hypocrisy tends to get labeled “tu quoque.” In logic, tu quoque occurs only when someone’s argument is dismissed because of their inconsistency. “You say smoking is unhealthy, but you smoke — therefore your argument is false” is tu quoque: the truth of the claim does not depend on whether the speaker lives up to it.

It is not fallacious, however, to call out hypocrisy in other contexts. For example: “You claim to care about the environment, but you dump waste in rivers” is not an invalidation of their argument but an exposure of inconsistency in behavior. The fallacy is using hypocrisy as a substitute for a rebuttal.

Shifting the Burden of Proof

Definition: Expecting others to disprove a claim, rather than proving it yourself.

Examples:

  • “You can’t prove God doesn’t exist, so He must.”
  • “No one’s disproven Bigfoot, so it’s real.”
  • “Until you show me ghosts aren’t real, I’ll keep believing in them.”

This fallacy flips the rules of rational argument. The burden of proof always falls on the person making the claim. If someone asserts something—whether it’s about gods, ghosts, or goblins—they’re responsible for supporting it.

Saying “prove me wrong” is not the same as giving evidence. And shifting the burden of proof is often a way to dodge responsibility for a weak argument. It’s not up to your audience to disprove you; it’s up to you to prove your claim.

Argument from Ignorance

Definition: Arguing that something is true because it hasn’t been proven false, or false because it hasn’t been proven true.

Examples:

  • “No one has proven aliens don’t exist, so they must be real.”
  • “There’s no evidence the defendant is innocent, so he must be guilty.”

Also known as: appeal to ignorance, argumentum ad ignorantiam.

Argument from ignorance doesn’t mean the speaker is “ignorant.” It refers to the claim that a lack of evidence is itself evidence. “We haven’t proven X false, therefore X is true” is not valid reasoning—absence of proof is not proof of the opposite.

This fallacy is especially common in supernatural claims: ghosts, gods, and monsters. “You can’t prove they don’t exist, so I win” is not a logical defense—it’s an attempt to flip the burden of proof, which always lies on the person making the claim.

Argument from Incredulity

Definition: Dismissing a claim because it seems unbelievable or hard to imagine.

Examples:

  • “I can’t believe humans evolved from apes, so it must not be true.”
  • “It’s just too crazy to think the universe came from nothing.”

Also known as: appeal to common sense, personal incredulity.

This fallacy relies on the speaker’s lack of imagination or understanding. But our inability to conceive of something doesn’t make it false. Reality isn’t limited to what seems intuitive—quantum physics, for example, violates all sorts of “common sense” assumptions, yet it’s backed by rigorous evidence.

Appeal to Popularity (Bandwagon)

Definition: Arguing that a claim must be true because many people believe it.

Examples:

  • “Everyone believes in God, so it must be true.”
  • “Most people think ghosts are real—you can’t just dismiss that.”

This fallacy relies on social proof rather than evidence. A widespread belief is not automatically a correct one. Millions of people have believed false things throughout history.

Red Herring

Definition: Introducing an irrelevant topic to distract.

Examples:

  • “We shouldn’t worry about pollution when there are so many people unemployed.”
  • “Why discuss healthcare costs when aliens might exist?”
  • “Forget his corruption charges—look how nice his family is.”

Also known as: irrelevant thesis, distraction fallacy.

In common use, people think “red herring” just means changing the subject. In logic, it’s more subtle: it’s introducing an irrelevant issue to distract from the argument at hand. For example:

“We should regulate factory pollution.”
“But what about unemployment rates?”

The second statement introduces a different concern to divert attention.

Importantly, not every tangent is a red herring. Sometimes related issues legitimately bear on the discussion (“Pollution regulation may affect jobs, so let’s weigh both effects”). It becomes a fallacy when the diversion prevents engaging with the actual claim.

Appeal to Tradition

Definition: Arguing something is right because it has “always been done.”

Examples:

  • “We’ve always had this policy, so we should keep it.”
  • “Marriage has always been between a man and a woman.”

Appeal to Novelty

Definition: Arguing something is right because it’s new or innovative.

Examples:

  • “This new app must be better—it’s the latest.”
  • “Our method is modern, so it must be superior.”

Appeal to Nature

Definition: Assuming something natural is automatically good (or unnatural is bad).

Examples:

  • “It’s herbal, so it must be healthy.”
  • “This remedy is natural, therefore safe.”

2. Fallacies of Ambiguity

Straw Man

Definition: Misrepresenting an argument to attack the weaker version.

Examples:

  • “You want more environmental rules? So you want to shut down all factories.”
  • “He thinks we should spend less on the military—so he wants us defenseless.”
  • “You don’t like fast food? So you think no one should ever eat it.”

Also known as: misrepresentation fallacy, caricature fallacy.

A straw man is not just disagreeing poorly. It specifically involves distorting or oversimplifying an opponent’s argument, then refuting that weaker version instead of the original claim. For example:
Person A: We should have some regulations on advertising to children.
Person B: So you want to ban free speech for businesses?

The distortion creates a weaker “straw” version of the argument, easier to knock down. The nuance: not every misunderstanding is a straw man. It becomes fallacious when the misrepresentation is deliberate or negligent enough to avoid addressing the actual claim. Clarifying questions or genuine misinterpretations are not fallacies — dishonestly changing the target is.

Equivocation

Definition: Using a word with multiple meanings to mislead.

Examples:

  • “A feather is light. What is light cannot be dark. Therefore, a feather cannot be dark.”
  • “The law can’t be broken. Breaking the law is illegal. Therefore, it’s impossible to break the law.”
  • “Fine for parking here” means both “it’s okay” and “you’ll be fined.”

Also known as: ambiguity fallacy, semantic shift.

Equivocation occurs when a word is used in two different senses within the same argument, blurring meaning. For example:
“Only man is rational. No woman is a man. Therefore, no woman is rational.”

Here “man” shifts between “human being” and “male.” The fallacy is not just using ambiguous words — we all do that — but using the ambiguity as the basis for a conclusion. In casual conversation, people often confuse equivocation with just “being unclear,” but in logic it is specifically when the shift in meaning carries the weight of the reasoning.

Amphiboly (Grammatical Ambiguity)

Definition: Exploiting ambiguous phrasing or syntax.

Examples:

  • “Flying planes can be dangerous.”
  • “Students complained to the teacher about the noise in the hall.”
  • “I saw the man with the telescope.”

Also known as: syntactic ambiguity.

Unlike equivocation (word-based), amphiboly arises from ambiguous sentence structure. For example:
“Save soap and waste paper.”
Does it mean conserve both, or save soap by wasting paper?

Another famous case: “One morning I shot an elephant in my pajamas.” (Who’s wearing the pajamas?) Amphiboly isn’t just clumsy writing — it’s fallacious if the ambiguous syntax is exploited to draw a conclusion.

3. Fallacies of Presumption

Begging the Question (Circular Reasoning)

Definition: Assuming what you’re trying to prove.

Examples:

  • “God exists because the Bible says so, and the Bible is true because God wrote it.”
  • “Democracy is the best system because nothing is better.”
  • “I’m trustworthy because I said I am.”

Also known as: petitio principii, assuming the conclusion.

In casual speech, “begs the question” is misused to mean “raises the question.” In logic, it refers to assuming the truth of what you are trying to prove. Example:
“God exists because the Bible says so, and the Bible is true because it is the word of God.”

The reasoning circles back on itself. The nuance: not all reasoning loops are fallacious if they are explanatory rather than justificatory. But when the premise and conclusion smuggle in the same claim, the circle invalidates the proof.

False Dilemma (False Dichotomy)

Definition: Presenting two choices as the only options.

Examples:

  • “You’re either with us or against us.”
  • “Either buy this expensive product or live miserably.”
  • “We must either cut education funding or go broke.”

Also known as: either-or fallacy, black-or-white fallacy.

This fallacy occurs when only two choices are presented as if they were the only possibilities. “You’re either with us or against us.” In reality, most issues have a spectrum of options.

The nuance: some dilemmas are genuine. For instance, “Either it is raining or it is not raining” is a true binary. The fallacy arises when alternatives are artificially limited. Recognizing false dilemmas requires asking: “Are there other viable options being ignored?”

Loaded Question

Definition: A question with an embedded assumption.

Examples:

  • “Have you stopped cheating on tests?”
  • “Why are you always so lazy?”
  • “When did you decide to waste your life?”

Also known as: complex question fallacy, trick question.

A loaded question embeds an assumption that traps the respondent. “Have you stopped cheating on your taxes?” presumes guilt regardless of the answer.

The nuance: not every leading question is loaded. A leading question nudges toward an answer (“Wouldn’t you agree that…?”), but a loaded question rigs the premise itself. To avoid the trap, the respondent must challenge the assumption, not the yes/no.

No True Scotsman

Definition: Redefining a group to exclude counterexamples and protect a universal claim.

Examples:

  • “No Scotsman puts sugar on his porridge.” — “But my uncle does.” — “Then he isn’t a true Scotsman.”
  • “All Christians agree with me.” — “This Christian doesn’t.” — “Then they aren’t real Christians.”

Anecdotal Evidence

Definition: Using personal stories as proof of a general claim.

Examples:

  • “My uncle never wore a seatbelt and he’s fine. Seatbelts aren’t needed.”
  • “I tried that diet and it worked, so it works for everyone.”

Hasty Generalization

Definition: Drawing a general rule from an inadequate sample.

Examples:

  • “My grandfather smoked and lived to 97. Smoking can’t be bad.”
  • “Two politicians were corrupt, so all are crooks.”

Gambler’s Fallacy

Definition: Believing past random events affect future random outcomes.

Examples:

  • “It landed red five times, so black must be next.”
  • “Three heads in a row, so tails is due.”
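
A quick simulation (a hedged sketch, not from the original text; the coin-flip setup and sample size are assumptions chosen for illustration) shows why the intuition fails: conditioning on a streak of heads leaves the next flip at roughly 50/50.

    import random

    random.seed(0)
    trials = 1_000_000
    flips = [random.random() < 0.5 for _ in range(trials)]  # True = heads

    streaks = 0
    tails_after_streak = 0
    for i in range(3, trials):
        if flips[i - 3] and flips[i - 2] and flips[i - 1]:   # three heads in a row
            streaks += 1
            if not flips[i]:                                  # was tails "due"?
                tails_after_streak += 1

    print(f"P(tails | three heads just happened) = {tails_after_streak / streaks:.3f}")
    # Prints roughly 0.500: the coin has no memory of earlier flips.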

Moving the Goalposts

Definition: Changing the criteria for proof after those criteria have already been met, so that the opponent can never win.

Examples:

  • “Show me one study that proves vaccines are safe.” — [Study provided] — “Well, show me ten studies.”
  • “If evolution is true, show me a transitional fossil.” — [Example given] — “That doesn’t count, show me another.”

4. Causal Fallacies

Post Hoc Ergo Propter Hoc (False Cause)

Latin: After this, therefore because of this

Definition: Assuming that because one event follows another, it was caused by it.

Examples:

  • “The rooster crowed, then the sun rose. The rooster made the sun rise.”
  • “I wore my lucky socks and we won the game. The socks caused it.”
  • “Crime dropped after we hired more police officers, so hiring caused the drop.”

Also known as: Post hoc, questionable cause.

The fallacy assumes that because one event followed another, the first caused the second. “I wore my lucky socks, and then we won the game — the socks caused the win.”

The nuance: temporal sequence alone doesn’t prove causation. But not every post hoc claim is fallacious — if backed by controlled observation (e.g., medical studies of side effects), temporal order can suggest causation. The fallacy arises when sequence is treated as sufficient proof.

Slippery Slope

Definition: Claiming that one step will inevitably lead to a chain of disasters.

Examples:

  • “If we allow kids to play video games, soon they’ll stop studying, then drop out, then end up homeless.”
  • “If you let people smoke in public, next they’ll be using drugs openly.”
  • “If we allow one immigration exception, the borders will collapse.”

Also known as: domino fallacy.

A slippery slope claim argues that a small first step will inevitably lead to extreme consequences. “If we allow students to redo tests, soon they’ll demand A’s for no work at all.”

The nuance: not all slope arguments are fallacious. Sometimes consequences do logically follow (e.g., unchecked erosion of rights). The fallacy is when the progression is asserted as inevitable without demonstrating the causal chain.


Correlation vs. Causation

Definition: Mistaking correlation for proof of causation.

Examples:

  • “Ice cream sales rise in summer. So ice cream causes heat.”
  • “Cities with more churches have more crime. Churches cause crime.”
  • “People who carry lighters get lung cancer. Lighters cause cancer.”

Also known as: cum hoc ergo propter hoc (“with this, therefore because of this”), false correlation.

This fallacy mistakes correlation for proof of causation. “Ice cream sales rise in summer, so ice cream causes sunburns.” Both are linked by a third factor: hot weather.

The nuance: correlation can be a starting point for investigation (epidemiology often begins this way). The fallacy is treating correlation as if it were the final proof of causation.
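
A small simulation (an illustrative sketch with invented numbers, not part of the original text) makes the “third factor” concrete: when a hidden variable drives two unrelated quantities, they correlate strongly anyway.

    import random

    random.seed(1)
    temps = [random.uniform(0, 35) for _ in range(365)]            # hidden factor: daily temperature
    ice_cream = [10 * t + random.gauss(0, 20) for t in temps]      # driven by temperature
    drownings = [0.3 * t + random.gauss(0, 2) for t in temps]      # also driven by temperature

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(f"correlation(ice cream, drownings) = {pearson(ice_cream, drownings):.2f}")
    # A strong positive correlation (roughly 0.8 with these numbers) appears even though
    # neither variable causes the other; both track the hidden third factor, temperature.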


IV. Cognitive Biases and Reasoning Pitfalls

Even when we avoid official fallacies, our thinking is far from immune to error. Two broad categories account for much of our trouble: cognitive biases, which are the mental shortcuts baked into our psychology, and pitfalls of reasoning, which are broader habits of sloppy thought or misuse of evidence. Both undermine good judgment.

A. Cognitive Biases

Confirmation Bias

We tend to notice, remember, and give extra weight to information that supports what we already believe, while ignoring or dismissing evidence that contradicts us.

Example: A person convinced that vaccines are dangerous scours the internet until they find one blog post backing them up, while ignoring dozens of large-scale peer-reviewed studies showing safety.

Availability Heuristic

We judge the likelihood of something by how easily examples come to mind. Dramatic or recent events skew our sense of probability.

Example: After seeing news reports about plane crashes, a traveler insists flying is riskier than driving—even though statistically, cars are far more dangerous.

Anchoring Bias

Our judgments are heavily influenced by the first piece of information we receive—even if it’s irrelevant.

Example: If someone suggests that a painting might be worth $10,000, later estimates tend to cluster near that anchor, even if the artwork is actually worthless.

Hindsight Bias

Once an event has occurred, we convince ourselves that the outcome was obvious all along.

Example: After a stock market crash, commentators say it was inevitable, even though almost no one predicted the timing or cause beforehand.

Dunning–Kruger Effect

People with limited knowledge or skill in a domain often overestimate their competence, while experts may underestimate theirs.

Example: A novice chess player believes he’s ready to beat tournament veterans after winning a few casual games online. Meanwhile, a grandmaster worries about weaknesses in her openings.

Survivorship Bias

We focus on the successes that remain visible while ignoring the many failures that disappear from view.

Example: Motivational speakers often point to billionaires who dropped out of college, ignoring the countless dropouts who never made it. The visible “survivors” skew the lesson.

Framing Effect

The way information is presented affects how we interpret it, even if the underlying facts are identical.

Example: A doctor says a surgery has a “90% survival rate” instead of a “10% mortality rate.” Patients respond far more positively to the first framing, despite it meaning the same thing.

Self-Serving Bias

We attribute successes to our own skill but blame failures on outside forces.

Example: A student aces a test and credits their intelligence. When they fail the next one, they blame the “unfair” questions.

Fundamental Attribution Error

We tend to assume people’s actions are due to their character, while excusing our own behavior as situational.

Example: If someone cuts us off in traffic, we think “what a jerk.” But when we do the same, we tell ourselves it’s because we were late for an important meeting.

Sunk Cost Fallacy

We continue investing time, money, or effort into something simply because we’ve already invested in it, even when quitting would be wiser.

Example: Someone keeps repairing an old car that constantly breaks down, reasoning, “I’ve already spent so much on it,” instead of admitting it’s time to let it go.

B. Pitfalls of Reasoning

These aren’t technically biases or official fallacies. Think of them more as matters of intellectual hygiene: habits of poor reasoning that may not have official names in psychology textbooks but still muddy our thinking and weaken arguments. They’re avoidable, but only if we’re mindful of them.

Overgeneralization

We draw sweeping conclusions from too little evidence. A handful of cases gets treated as a universal rule.

Example: Meeting one rude tourist from France and concluding, “the French are all rude.” The error is projecting a small anecdote onto an entire population.

Cherry-Picking Evidence

Instead of weighing all the evidence, we selectively highlight what supports our point and ignore the rest.

Example: A speaker arguing for a fad diet cites one small favorable study while neglecting the ten larger studies that show no effect.

Correlation Without Context

We notice two things happening together and assume a causal link, without considering other factors.

Example: Ice cream sales and drowning deaths both rise in summer. The careless conclusion is that ice cream causes drownings, rather than both being driven by hot weather.

Failure to Consider Alternatives

We assume our explanation is the only one possible and dismiss competing accounts.

Example: Someone dreams of an earthquake and later experiences one. They conclude the dream predicted the event, overlooking explanations like coincidence, selective memory, or subconscious cues.

Overreliance on Intuition

Gut feelings can be useful in everyday life, but relying on them in complex or technical matters leads to error.

Example: A hiring manager picks candidates based on “vibes” rather than qualifications, structured interviews, or work samples, leading to poor hires.

Neglect of Base Rates

We focus on a striking case while ignoring the background statistical reality.

Example: A person receives a positive result on a test for a rare disease and assumes they must have it, without realizing that false positives vastly outnumber true ones for such conditions.
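
A short worked example (with hypothetical numbers chosen for illustration, not from the original text) shows how strongly the base rate matters:

    # Assumed numbers: the disease affects 1 in 1,000 people; the test catches 99%
    # of real cases but also flags 5% of healthy people.
    prevalence     = 0.001
    sensitivity    = 0.99   # P(positive | disease)
    false_positive = 0.05   # P(positive | no disease)

    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    p_disease_given_positive = sensitivity * prevalence / p_positive

    print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
    # About 1.9%: even after a positive result the disease is still unlikely, because
    # false positives from the large healthy population swamp the true positives.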

False Precision

We treat rough numbers or shaky estimates as if they were exact. This gives a false sense of certainty.

Example: Claiming that “82% of people prefer Brand A” based on a tiny, unrepresentative sample of twenty people, as if the number were exact and authoritative.
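
A rough margin-of-error calculation (a back-of-the-envelope sketch using the normal approximation; the 82% and n = 20 come from the example above) shows how little precision such a sample supports:

    n = 20
    p_hat = 0.82                                    # the reported "82%"
    margin = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5

    print(f"82% +/- {margin:.0%}")                  # roughly +/- 17 percentage points
    # The honest claim is "somewhere between about 65% and 99%",
    # not a precise-sounding "82%".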

V. Debate and Rhetorical Techniques (coming soon)


VI. Tools for Clear Thinking (coming soon)


VII. Reference Appendices (coming soon)

Timar Ross

Amateur historian writing skeptical, source-driven analyses of biblical prophecy and ancient history. MLA citations; NRSVUE quotes; context over proof-text.
Medellin, Colombia