Why Good Judgment Still Outwits AI: Lessons For Leaders To Avoid Work With No Soul


Are you ready to phone it in and let AI do your job?

Be careful. AI generates output that has no soul.

I’ve seen an alarming trend lately with how people have chosen to adopt AI into their workflow.

Here’s a scenario typical of what I am watching unfold:

Colleague: “I’m ready to review my findings and insights from the interviews I performed with you.”

Me: “Great. Let’s take a look.”

(We review the write-up.)

Me: “Um… I don’t remember these findings. The feeling of the participants was more nuanced. Remember how passionate they were about the new direction? And how they’d failed before and how it made them feel? That doesn’t come out in your analysis.”

Colleague: “Well, to be honest, I fed the interview transcripts into ChatGPT. With all my other commitments, I didn’t have time to write this up. So, I had AI do it for me.”

Me: “Oh.” (Gulp)

What would have happened if I had not caught this gross oversight? The findings would have been…robotic.

This is one example of many, and I find it frustrating.

Some people know how to use AI as a competitive advantage. Others, looking for a quick fix, are mistakenly handing the reins over to AI. But this is a failing strategy. Why? Because human judgment is absent.

Here’s the scary part.

When I stumble upon these situations, I know I’m only catching a few instances. More and more, folks are letting AI do their work outright. The result? Corporations are acting on soulless, AI-driven decisions every day.

And that’s frightening.

Leaders today need to be aware of the good and bad use cases for AI and educate their employees on them. You want AI to amplify the good, not the bad.

When your employees turn their minds off, AI becomes a liability, not an asset.

Here are some surprising use cases of AI that fail spectacularly.

AI has extended the boundary of what’s possible.

As with any breakthrough technology, though, some uses don’t work as well as others. It’s easy to get lost in the hype and see everything as a nail and AI as the hammer. I know because I’m guilty of this myself.

By becoming educated on the limitations of AI, you can sift fiction from reality. That knowledge helps you use it responsibly.

These are the main misconceptions I’ve run into that you and your employees should be aware of.

Misconception 1: AI is all-knowing.

Today’s Large Language Models recognize patterns, but they have no understanding.

Is a pattern good? Is it bad? An LLM couldn’t care less. Frequent patterns rise to the top, good and bad alike.

Try it for yourself. Have an LLM respond to a prompt you already know how to answer. You will find an element of truth in what it spits out. But more than likely, you will see the cracks in the response. It lacks depth and nuance.

Something human is missing.

So, you can’t blindly trust what you get from today’s AI. You need to use your own experience and judgment to assess the answers AI generates for you.

AI can’t assess the merit of what it tells you. You have to do that.

If you know this, you won’t fall into the next trap.

Misconception 2: AI can do my job for me.

Autonomous AI has the appeal of the ultimate delegation: handing off all the work in our jobs we don’t feel like doing.

But sitting around in our pajamas watching Netflix while AI does all our work for us isn’t reality. Actually, I hope it never is. That’s depressing.

As we discussed in misconception 1, AI can’t reason like a human can.

I’ve found this to be most true when it comes to communication.

Now, most people are awestruck at first by the natural language capability of these LLMs. On the surface, they seem excellent at communication.

But AI’s ability to understand stops with the written word. It misses out on other aspects of communication like body language, tone of voice, and emotion. Psychologist Albert Mehrabian found that when we communicate feelings and attitudes, these nonverbal cues carry as much as 93% of the message. Words account for only 7%.¹

Text alone is a thin slice of communication, and text is all today’s AI really understands.

You may be thinking, “What about ChatGPT 4o Advanced Voice Mode?”

Good point. This multi-modal LLM can give the impression of emotional connection through voice conversation. But don’t let it fool you. The capability is still nascent and unreliable.

AI isn’t a human, even if it sounds like one.

Don’t farm out your job to a robot that only gets 7% of the communication pie.

Understanding the human condition requires humans.

Misconception 3: AI can give me capabilities I don’t currently have.

AI opens the door to easily accessible knowledge for us all.

But walking through that door and relying on AI to fill a skill gap is dangerous. Your lack of experience is the problem: you can’t judge the merit of the guidance AI gives you.

Let’s take an example.

A common use case is to ask AI for a set of interview questions. If you have never conducted an interview, you don’t know the types of questions to avoid:

  • Broad questions
  • Yes/No Questions
  • Leading questions
  • Multipart questions

These are all no-nos for interviews. But if an LLM spits out questions like that, you won’t know the difference as a newbie. Then you conduct the interview, make the interviewee uncomfortable, and get unusable answers.

Stretching beyond your own capability puts too much faith in AI. Don’t do it. It’s not a winning move.

Just like AI can’t do your job for you, it can’t do a job you can’t perform yourself.

Misconception 4: AI can help me automate my crappy workflow.

Don’t automate a bad process.

Bad or excess processes accumulate over time. We tend to accept them as “the way things are.”

It’s always better to fix a bad process than to automate it. When you automate a bad process, you make it even harder to change: once it’s effortless, you’re even less likely to question it.

A bad process done faster gets you nowhere fast.

For instance, assume today your stakeholders generate ideas in a vacuum. They hand them off to your team to build, no questions allowed. In this case, you are their robots.

Then, your stakeholders decide to use AI to generate the ideas they force on your team. They have just automated a bad process. It would be better to involve your team in the ideation. Then, you could use AI to help you and your stakeholders brainstorm together.

Automating a bad process does not make the process good.

Correct the process first.

Then, automate the time-consuming parts of it with AI.

But don’t trust even the parts you automate.

Human judgment must remain.

3 Use Cases I’ve Tried That Actually Make Sense For AI (Today)

I’m not claiming AI can’t speed up our workflow.

But how you use it matters. I’ve found three use cases over the past two years where AI becomes a force multiplier for me and my teams.

Let’s dive in.

Use 1: Idea Juicing

AI can help you avoid the blank page.

It can get the juices flowing. AI can give you the spark that leads to a new avenue of thought.

I use this all the time when I am brainstorming.

  • Article titles.
  • Email subject lines.
  • Novel solutions to a common problem.
  • Solution ideas for a problem statement.
  • Counterarguments against my position.
  • What other experts in the subject domain would do.

But I have one rule: I never use the AI ideas verbatim.

I use the ideas to feed my imagination. Most of the ideas from AI are good, not great. But they often send my mind down a direction I had not considered.

This idea “juicing” is great for individuals or groups.

Give it a try next time you are brainstorming.
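If you’d rather script this kind of juicing than paste into a chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt, and topic are placeholder assumptions for illustration, not the author’s actual setup.

```python
# Sketch: generate raw brainstorming material, never to be used verbatim.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

topic = "an article about responsible AI adoption for product teams"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=1.0,  # higher temperature favors variety over polish
    messages=[
        {"role": "system", "content": "You are a brainstorming partner. Offer rough, varied ideas, not finished copy."},
        {"role": "user", "content": f"Give me 10 working titles for {topic}. Vary the angle on each one."},
    ],
)

# Treat these as sparks for your own thinking, not final output.
print(response.choices[0].message.content)
```

The point of the higher temperature is variety: you want raw material that nudges your imagination, not polished copy you’ll be tempted to reuse word for word.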

Use 2: Research Assistant

I hate research.

Some people love it, but not me. Making my way through dense scientific studies or vast volumes of data isn’t my cup of tea.

Yet, even if I loved it, I’m not that fast at it. This is where AI shines. It can sift through lengthy tomes of information. Within minutes, it makes sense of the data to answer your prompts.

The sheer speed of AI at doing research makes it compelling.

Here are some ways my team and I use it:

Open-ended Queries. Ask what studies exist on a topic. This is great when you don’t know what’s out there. The results can take you in unexpected directions.

Finding answers in existing research. Have it sift through data, studies, documents, or survey answers to return trends or topic hits. I love being able to ask questions in an unstructured way and get back relevant information. It’s a real time-saver.

Slicing and dicing data to find trends. AI is good at finding trends in a sea of information. Sure, we could do it ourselves, but it would take much longer. Have it notice patterns for you.

I’ve been using AI for research like this for over a year. It’s great.

Be sure to always ask for references and citations for any research it returns. Remember, you can’t remove human judgment. Treat AI research results like you would the work of a research intern. You still have to check it.
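Here is a minimal sketch of what that looks like in practice, again using the OpenAI Python SDK. The file name, model, and prompt wording are assumptions for illustration only; the key idea is forcing the model to quote its sources so you can verify them.

```python
# Sketch: summarize trends in a document and demand supporting quotes,
# so a human can check every claim against the source afterward.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Placeholder path; swap in your own transcripts, survey exports, or studies.
source_text = Path("survey_responses.txt").read_text()

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0.2,  # lower temperature for analysis rather than ideation
    messages=[
        {"role": "system", "content": "You are a research assistant. Use only the provided text. Quote the passage that supports each finding."},
        {"role": "user", "content": f"List the main trends in these survey answers, with a supporting quote for each:\n\n{source_text}"},
    ],
)

print(response.choices[0].message.content)
# Human judgment still applies: verify every quote against the source
# before acting on the summary, just as you would with an intern's work.
```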

Smart, fast research task completion—it’s a great AI use case.

Use 3: Gap Analysis

AI is great at helping you see and fill in the gaps in your work.

When I have a solution draft, I feed it into the AI and ask these types of prompts:

  • What did I miss?
  • What objections should I expect?
  • What would (some expert) think about this?
  • What is the difference between these two things?

I always get back results that help me fortify my solution.

Remember, solution generation from a blank page is not an AI strength. But editing something you have created is a different story. You can ask it to analyze your draft from different angles. This pressure testing has made my initial solutions stronger.
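For those who want to run this pressure test as a script, here is a minimal sketch of the same idea with the OpenAI Python SDK. The draft file name, the example persona, and the model name are illustrative assumptions.

```python
# Sketch: run a draft solution through several critique angles in one pass.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

draft = Path("solution_draft.md").read_text()  # placeholder file name

critique_prompts = [
    "What did I miss?",
    "What objections should I expect?",
    "What would a skeptical security architect think about this?",  # example persona
]

for prompt in critique_prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "Critique the draft. Point out gaps, risks, and weak arguments. Do not rewrite it."},
            {"role": "user", "content": f"{prompt}\n\nDraft:\n{draft}"},
        ],
    )
    print(f"--- {prompt} ---")
    print(response.choices[0].message.content)

# The AI surfaces the gaps; deciding which critiques matter is still your job.
```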

You can improve the thinking within your solutions by using AI.


That’s it.

Don’t consider AI as:

  1. All-knowing.
  2. A replacement for the work you do well.
  3. A substitute for work you don’t know how to do yourself.
  4. A way to automate your bad process.

But do use it to assist you with ideas, research, and gap analysis.

Tying it all together:

  1. Use AI to help you come up with ideas.
  2. Use AI to help you confirm your ideas against research.
  3. Use AI to poke holes in your solution, so you can improve it.

I know that AI is advancing fast. I may look back on this article in a year and see it as naive. But right now, it’s how I use AI.

We have to be responsible in our use of this emerging technology. Until it’s as good as us or better, it can’t replace us. That’s a scary proposition anyway. Let’s not hurry that along by replacing ourselves with it today.

With a bit of deliberate care, you can responsibly use AI to amplify human achievement.

Good luck out there, humans.


➡️ Sign up for weekly insights like this on getting back to the fundamentals that underpin unstoppable product teams. Join a community of like-minded professionals committed to achieving outcomes sooner.


References

  1. Mehrabian, Albert. Silent Messages, 1971.
