From Logic to Intuition: Gen AI Impact on Analytical Thinking

Thinking is one of the greatest things we can do with our brains. With thought, we’re able to reason through possibilities, make novel discoveries, and even ponder some of life’s greatest mysteries.

The brain consumes approximately 20% of the body’s energy, which it needs to support its complex functions, including information processing and maintaining cellular integrity. That’s a lot of energy consumption for an organ that accounts for only about 2% of our body weight!

However, have we entered an era in which we’ve outsourced our brain power through an over-reliance on generative AI? Now don’t get me wrong, I leverage generative AI as much as the next productivity-obsessed corporate worker and content creator. But it’s time we take a step back and truly consider whether we’re doing ourselves a disservice when we accept generative AI’s output as truth. We’ve become awestruck by what generative AI can produce. Emails drafted in seconds. Reports written in mere moments. Slide decks beautified and organized into something aesthetically pleasing almost instantaneously. I’ll admit, it’s been truly remarkable to witness what we can do with generative AI.

But as with most technology, not all that glitters is gold. Generative AI can make mistakes. And our over-reliance on the tech is leading us to a place where we begin to shut off our analytical thinking and instead opt for intuitive thinking.

Intuitive thinking, also known as intuition, is a cognitive process characterized by the ability to understand or know something immediately without the need for conscious reasoning or analysis. It involves relying on instincts, gut feelings, or unconscious knowledge to make decisions or solve problems. Intuition often operates quickly and effortlessly, allowing individuals to reach conclusions or take action without being fully aware of the underlying reasoning. This is our brain on generative AI - or at least what it sometimes seems to be.

Intuitive thinking contrasts with analytical thinking, which involves deliberate reasoning, logical deduction, and systematic evaluation of information. While analytical thinking relies on conscious effort and rationality, intuitive thinking tends to be more spontaneous and subjective. We want to get things done now. We want them done fast. And generative AI fills that need. The companies and individuals behind these tools optimize heavily for speed, delivering the information we seek ever faster. The quicker the response, the faster we can move on to our next task.

But what if we took a moment to stop and think? What if we turned on our analytical brain for just a second to truly analyze and dissect the output that’s been generated? It’s not always necessarily a matter of fact-checking, but rather of considering whether the tool’s output is relevant and sound for your use case.

Generative AI generates output based on the information it’s been given. There’s input from the data on which it has been trained, and then there’s input from the prompt it’s been provided. With too little relevant and complete input in the prompt, generative AI may totally miss the mark on providing you with the response that would benefit you most. And with too little training data on the relevant subject matter, generative AI may fail to provide a grounded response and opt for a hallucination instead.
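
To make that concrete, here’s a minimal sketch of the prompt side of the equation. It’s illustrative only: it assumes the OpenAI Python SDK and a placeholder model name, but any chat-style model client would work, and the prompts themselves are hypothetical.

```python
# Sketch: the same request made with and without relevant context.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o",  # substitute whichever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Context-poor prompt: the model has to guess the audience, scope, and format.
vague = ask("Write a report on our sales.")

# Context-rich prompt: relevant, complete input narrows the guesswork.
detailed = ask(
    "Write a one-page summary of Q3 sales for a non-technical executive "
    "audience: revenue vs. Q2, the top three product lines, and one risk "
    "to watch. Use plain language and short paragraphs."
)
```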

In this race for productivity gains, how often are we taking a moment to validate AI’s generated responses? Do we find ourselves leading more with intuitive thinking - trusting our gut and failing to think analytically? Would we even inherently know whether we’ve been given false information? A part of me feels that we’re leaning in this direction, given that generative AI has become idolized as an all-knowing magical technology, capable of human intelligence.

But generative AI can make mistakes.

The reality is that generative AI is only as good as the data it’s been given. Data quality is paramount when training a model. Earlier, I mentioned the term ‘hallucination’, and I meant it in the context of generative AI. Similar to how humans hallucinate, generative AI has the ability to do so as well! A hallucination is a nonsensical or unexpected output from the AI.

Generative AI models are trained on large datasets, which may contain biases or noisy data. If the training data is incomplete, unrepresentative, or contains errors, the model may learn incorrect associations or generate misleading outputs. There’s also overfitting, which occurs when a generative AI model becomes too specialized in the training data and fails to generalize well to unseen data. In such cases, the model may produce outputs that closely mimic the training examples but lack diversity or fail to capture the broader context accurately. And then there’s ambiguity in the input. Generative AI models often rely on contextual cues in the input data to generate appropriate outputs. If the input is ambiguous or lacks sufficient context (i.e. we’re the ones at fault), the model may struggle to produce coherent outputs and may "hallucinate" by generating nonsensical or irrelevant content.

Most often, online generative AI tools provide warnings that the tool can make mistakes and that it’s recommended to fact-check the generated response. I’ve had instances where I’ve caught some of the more well-known and popularized generative AI tools in the act of hallucination. And what’s been most interesting about those moments is that when you call out the hallucination, the tool readily admits that it was wrong. It makes me wonder why the AI would generate a false response if it knew the response was inaccurate.

Not all hallucinations are blatantly obvious - especially if you’re dealing with new or unknown subject matter. We’ve become used to generative AI providing so many accurate responses that false responses become more challenging to detect. This is where analytical thinking becomes key.

I recommend we take a moment to slow down and truly analyze what these tools output before taking their response as law and moving on to the next task at hand. What I’ve found to be helpful is to follow up with a prompt that asks the tool to provide its sources for the information that’s been generated. That usually does the trick when something sounds fishy or when I plan to leverage the output as factual data for a document I’m drafting.
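
In code form, that follow-up is just a second turn in the same conversation. Here’s a minimal sketch, again assuming the OpenAI Python SDK, a placeholder model name, and a hypothetical question:

```python
# Sketch: ask a question, then follow up asking the model to cite its sources
# so the claims can be checked by hand. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user",
     "content": "Roughly what share of the body's energy does the brain use?"}
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content

# Keep the model's answer in the conversation, then ask where it came from.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": "List the sources for that figure so I can verify them myself.",
})

followup = client.chat.completions.create(model="gpt-4o", messages=messages)
print(answer)
print(followup.choices[0].message.content)
```

Keep in mind that the sources it lists need a quick check of their own - models can hallucinate citations just as readily as facts.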

Whatever you do, don’t fall into the trap of throwing away the analytical part of your brain. Generative AI can be a very useful tool, but I implore you to be cognizant of its ability to make mistakes. Leading with intuition and trusting our gut has its purpose; however, let’s not trust it too much.
