AI & Learning

An Op-Ed from the ThinkerAnalytix Perspective

The Future is in Philosophy

The development and dissemination of artificial intelligence systems are forcing a widespread debate about what it means to learn and to work. Faculty worry that they’re losing grip on the things that drew them to teaching in the first place: integrity, empowerment, and learning. Should students be able to write assignments with ChatGPT as an editor, or even as a co-author? What is the value-add of person-to-person learning?

Businesses consider what work can be replaced by automated systems, and employees worry about what that means for their industries and families. Should we be able to outsource tasks to tools like ChatGPT, asking these systems to write marketing copy, important emails, academic articles, even code?

What kinds of skills should be left to people?

These debates are rampant, and frankly a little stale.

Philosophers have a phrase: ought implies can. The idea is that if I tell you that you ought to do something, a necessary assumption I’m making is that you’re actually able to do whatever it is I’m asking. The contrapositive also holds: if you can’t do something, it would be unreasonable for me to tell you that you ought to do it. So if the future is one in which students will have to engage with AI, one in which they can’t help but use these systems because they’re required for their jobs or so ingrained in everyday life that they’re unavoidable, then it’s unreasonable, maybe even unethical, for leaders to assert that students shouldn’t learn how to use those systems during their formative years.

So, rather than ask what should be left to people, we suggest that an alternate framing of the problem is more illuminating. Rather than focus on what AI takes away, focus on what the taking away affords. Instead of spending hours completing repetitive, monotonous tasks that feel robotic, even dehumanizing, the inclusion of AI leaves us space to ask: What activities bring us joy? What do we value?

AI can help you answer these questions too, but it can’t dictate what your true, authentic values are, or how those values should shape your decisions. That unique capacity for reasoning about our values elevates our humanity and autonomy, and focuses our attention on the big, important questions about what makes life worth living.

How do we ensure every person is able to reason well about these big issues? By returning to the ancient, almost pre-technological practice of philosophy: reasoning, repeatedly, and together, and learning from our (il)logical mistakes. 

So even in the face of smart systems and models, every person needs the capacity to think clearly about reasons. Building that capacity at scale is what our organization aims to do, through partnerships with schools, universities, professional organizations, and experts in AI, teaching, and learning.

At ThinkerAnalytix we see technology not as a threat but as a tool. 

A phone in every student's hand and a laptop on every (home) office desk mean that high-quality, essential skill-building opportunities can be made available to more people, at lower cost.

We try to do what the ancient schools of philosophy did, asking learners to engage their reasoning skills with current issues and essential debates, but we do so via immersive, accessible, custom-built online learning platforms.

We see the release of AI systems like ChatGPT as a massive, important moment. It’s an opportunity to equip and empower learners with the skills to manage and enhance how they spend their time.

And outside of classroom walls, the world is asking for these skills. Managers report a desire to outsource more menial tasks to AI. They want an AI-literate team. But they also report an increasing desire to find hires who have strong critical thinking (read: reasoning) skills.

So what skills will people need to engage with these new technological systems? First, they have to know how to use these systems effectively and understand their power and limitations. Rather than bury our heads in the sand, educational institutions and professional learning departments need to be leaders in preparing learners to be members of an increasingly technological society. And that means forging a stronger interface between where we learn and where we work. ThinkerAnalytix bridges this gap by building learning opportunities and programs for students, for faculty, and for professionals based on input and guidance about what they really need.

Second, we need to empower learners with the skills they need to be critical consumers of technology. Not only should learners be able to ask whether the claims spit out by an AI bot are true, they should be able to ask deeper social, political, and moral questions about the role these systems play in our world, and answer those questions with their own voice. They need to be able to think about the ethical dimensions of the technology they use, and about how they see themselves as a person of worth and dignity in a technological world.

In essence, every person needs to be a philosopher.

How can we achieve this? By opening up the debates about AI currently happening in faculty meetings, published academic articles, and online blogs to include those who stand to gain (and lose) the most from our policy decisions: our students.

But there’s a prerequisite. To be able to engage these debates, students need to build the uniquely human capacity for reason. And that’s what we do (uniquely) at ThinkerAnalytix.

We can’t build these skills through traditional instruction. We have to harness technology: not just AI, but any technological tool that will make these skills accessible to thousands, even millions, of students simultaneously. Technology coupled with leading-edge pedagogy from top research institutions is what we need, and what learners deserve, to become modern-day philosophers in an AI-enhanced world.

Future Work

Our team is currently working to enhance and expand our programs by leveraging AI tools.

Over the next year, we plan to: