Think Before It Tells You To

“Not your weights, not your brain.” - Andrej Karpathy

You have a school assignment due in 2 hours, and you’re incredibly unprepared for it. It’s late at night and you’ve already had a long day. You fear you won’t be able to complete it on time, and think to yourself:

I’m not even sure where to start… let me ask ChatGPT for some help.

Within a few short moments, you have answers. Maybe you can finish this in 2 hours.

There’s a lot to unpack about why situations like these are problematic, and not enough discussion about the consequences of LLMs. In fact, a new AI startup has made it even easier to find answers to any problem set or essay with a simple screenshot. Many people are turning to AI for solutions: ChatGPT alone has 200 million weekly active users. For perspective, if ChatGPT’s user base were located solely in the US, over half of the country’s population would be using the platform.

LLMs are extremely convenient. They’re designed to answer any question, regardless of whether the answer is true. This is far more efficient than researching a topic online, let alone in the pre-internet era, when you had to physically go to a library and flip through multiple books. Any question you can think of now has an instant answer, and these models will only grow more capable over time.

But this convenience comes with costs. There are several I could talk about, but I want to focus on two.

In a strange way, asking an LLM for an answer sometimes feels like copying someone else’s work. In reality, the output comes from its weights and training data, but I’m using the LLM’s ‘digital brain’ instead of my own. This is potentially dangerous: reasoning and critical-thinking skills are handed off to a machine. OpenAI’s newest model, o1, can reason through complex tasks and solve harder problems than previous models; it is also OpenAI’s most persuasive model to date, demonstrating human-level persuasion capabilities across a number of evaluations. There will be some tasks we’ll want to give to the machine, but we’ll need to consider which tasks we don’t want to offload, and why. In this intelligence age, we’ll have to rethink the role of education alongside AI and ensure that we highlight our ability to interpret and work with it. To do this, we’ll likely need a new type of teacher, curriculum, and learning experience.

There’s another really important point I want to highlight in the school assignment example: I’m going to ChatGPT for answers. Right now, a handful of large tech companies control these AI systems. These companies are providing us answers and telling us how to reason through questions. In a recent interview, Jack Dorsey emphasized this point:

“Five companies are building tools that we will all become entirely dependent upon. And because they are so complicated, we have no idea how to verify the correctness, we have no idea how to verify they work, what they’re actually doing...every single day, someone will encounter an intelligence that is interacting with them or dictating what they do or don’t do with their day.”

To address this concentration of power, Dorsey proposes open-source solutions. This approach allows users to choose their worldview, select different algorithms, and gain insight into how these systems operate. While I generally support open-source initiatives, it's important to recognize that these major tech companies will continue to release increasingly sophisticated models, likely maintaining their market position. I use LLMs almost every day, and I believe in their potential to positively impact society over time. However, it is crucial to engage with them critically, applying a healthy dose of skepticism and careful consideration regarding which tasks are appropriate to delegate to AI. This balanced approach enables us to unlock the benefits of these technologies, while remaining cognizant of their limitations and the implications of mental offloading.
