The Conversations We Need for Education in 2025
December 19, 2024 | Blog
Last week, we published our annual Time for Class study, which focuses on the state of digital learning and AI in higher education. The study examines how faculty and institutional leaders are using digital learning tools to implement teaching practices that can improve student outcomes. This year’s edition also gathered the student perspective, revealing important disconnects between educators and learners, and took the pulse of higher education on the rise of artificial intelligence tools like ChatGPT. This month, we aim to translate those findings into considerations for evaluating investment opportunities.
The ongoing wave of hype around artificial intelligence has given rise to cloudy narratives about both the upside and the downside of this technology’s impact on education. While AI could transform many aspects of education, administrators and instructors – particularly at higher education institutions – are facing clear, near-term challenges.
One early area of concern is academic integrity. Institutions have historically addressed it through the adoption of anti-plagiarism tools and assessment security solutions. Since ChatGPT’s release in November 2022, however, “preventing student cheating” has become the top instructional challenge for faculty, a remarkable jump from tenth place in a single year.
These concerns are in part justified: students are adopting generative AI tools far faster than administrators, instructors – or their curricula and assessments – can keep up. In March 2023, roughly 100 days after the launch of ChatGPT, 27% of higher education students reported being regular users of generative AI tools, compared to only 9% of instructors and 8% of administrators. Institutional policy responses have also been slow, with only 3% of institutions reporting a fully implemented AI policy during the same period. Whether policies will be effective once in place is another thorny question; survey data indicates that students would continue to use AI tools regardless of institutional or course-level guidelines or restrictions, suggesting that while policies are necessary, they are not sufficient.
While the academic integrity discussion brought on by generative AI has taken center stage, it is not the only area of concern where administrators expect increased funding. From a student success perspective, administrators anticipate new dollars flowing to several other priority areas.
Investors should ensure that any AI company they back is using the technology to solve an institutional problem where dollars are actually flowing, as opposed to tackling a lower-priority challenge simply because the technology is capable and flashy.
Since the launch of OpenAI’s APIs, companies large and small have begun adding large language model capabilities to their existing product suites – Khan Academy’s Khanmigo and Chegg’s CheggMate are just two examples – putting these functionalities into many hands quickly. Despite the institutional education sector’s tendency toward slow adoption of innovation, these inexpensive, easy-to-use APIs could lead many more providers to experiment with and tout new functionalities. It will therefore be critical that innovators experiment safely – with efficacy and the student and instructor experience top of mind – because these base models are generally not tuned for safe educational use out of the box.
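To make that concrete, here is a minimal sketch of what such an “LLM feature” can look like in practice. It is purely illustrative, not any provider’s actual implementation: it assumes OpenAI’s Python client, and the model name and tutoring prompt are hypothetical. The point is how little code separates a base model from a branded product feature – and that, absent deeper tuning, the “educational safety” lives in a single prompt.

```python
# Illustrative sketch only: an "AI tutor" feature built as a thin wrapper over
# a general-purpose LLM API. The prompt, model name, and guardrail wording are
# hypothetical assumptions, not any vendor's actual product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Out of the box, the base model will happily write a student's essay.
# In a wrapper like this, the only "educational tuning" is the system prompt.
TUTOR_PROMPT = (
    "You are a tutor. Guide the student with questions and hints. "
    "Do not write essays, solutions, or final answers on the student's behalf."
)

def tutor_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable base model
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("Write my five-paragraph essay on the causes of WWI."))
```

The fragility is the point: nothing here stops a determined student from rephrasing the request until the guardrail slips, which is exactly why base models need deliberate tuning before safe educational use.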
Further, investors should stay disciplined in their pursuit of sustainably differentiated offerings, as it remains unclear whether this type of API plug-in is enough to create fundamentally new business models or is simply a new product feature.
As Reach Capital noted, “Simply building an API to GPT-3 or another out-of-the-box model, and adding some UX wrapper around it, is not a strong differentiator.” Companies like Khan Academy and Chegg have vast amounts of vetted, proprietary data that allow them to “tune” base models, improving their safety and efficacy in educational contexts. Large players also have an advantage of sheer scale, as the tuning process requires rapid iteration on customer test data.
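As a rough illustration of what that “tuning” advantage looks like in practice, the sketch below follows OpenAI’s published fine-tuning flow, but the file name, model choice, and example data are hypothetical assumptions. The scarce asset is the vetted, proprietary data; the API calls themselves are commodity.

```python
# Hypothetical sketch: fine-tuning a base model on vetted, proprietary
# tutoring data. File name, model, and example content are assumptions.
import json
from openai import OpenAI

client = OpenAI()

# Human-vetted exchanges (expert hints, not full answers) are the moat;
# a thin API wrapper has nothing comparable to train on.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a math tutor who gives hints."},
            {"role": "user", "content": "Solve 2x + 6 = 10 for me."},
            {"role": "assistant", "content": "What happens if you subtract 6 from both sides first?"},
        ]
    },
    # ...thousands more vetted examples, iterated against real customer test data
]

# Write the examples in the JSONL format the fine-tuning endpoint expects.
with open("vetted_tutoring.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the data and start a fine-tuning job on a tunable base model.
uploaded = client.files.create(file=open("vetted_tutoring.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-3.5-turbo")
print(job.id)  # the resulting model defaults to the vetted behavior
```

Rapid iteration on that data, at scale, is what a thin wrapper product cannot easily replicate.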
While investors should be wary of getting caught up in the AI “noise,” the promise and potential of this technology justify exploration. Tyton recommends taking a principled, fact-based approach to evaluating opportunities in this space. More specifically, look for companies that …
Tyton is watching closely as generative AI continues to make waves across education. Be sure to check out our recent piece highlighting K-12 educators’ perspectives on AI in the classroom.
As always, we welcome the opportunity to continue this discussion.