Artificial Intelligence in K-12 Schools: Are Educators and Ed-Tech Developers Overlooking the Biggest Risk? (And Biggest Opportunity?)

[Image: a two-headed medieval Janus figure]

This post was originally published in June 2020, and was updated October 31, 2025.


Editor’s Note (October 2025):
This piece was initially posted in June 2020, before generative AI tools like ChatGPT and GPT-powered “AI tutors” showed up in nearly every classroom conversation. In the past two years, teachers have begun using AI to draft lesson plans, design quizzes, translate parent emails, and even speed up grading and feedback, often to claw back time and reduce burnout.
At the same time, districts rushed (and are still rushing) to create “responsible AI use” guidelines, mostly on the fly — about plagiarism, data privacy, bias, and who owns student work — and more than 30 U.S. states have now published formal K–12 AI guidance or policy frameworks.


Today, the headlines tend to focus on hype (“AI tutor for every kid!”) or panic (“cheating, bias, surveillance”), but the deeper tension I describe below hasn’t changed: ed-tech teams often arrive with a “solution,” while school systems are stuck managing the human, legal, cultural, and political realities of actually implementing change with children. That friction is still the biggest opportunity — and still the biggest risk.




 

Whether we like it or not, the AI wave is coming...

Although ed-tech developers may have big ideas, many educators have big concerns... Past lessons suggest that new and better tools won't necessarily bring meaningful and sustained school improvements.

In this article I consider whether a critical "mindset disconnect" between B2B technology developers and educators might hold at least one key to avoiding the biggest risks and unlocking the biggest long-term rewards and opportunities as schools purchase innovative AI solutions.

The McKinsey Global Institute has projected that an astonishing 70% of companies worldwide will have implemented one or more types of AI technology by 2030, a wave of innovation predicted, in the same forecast, to generate some $13 trillion in economic activity.

But AI isn't only about economic activity. It's about creating unprecedented opportunities for strategic action, growth, reform, and responsiveness in any sector of human endeavor: problem-solving...medicine...forecasting...(re-)distributing or accumulating...regulating...predicting and preventing...innovating (and accelerating the speed of innovation)...teaching...learning...producing... These modes of action define much of individual and social activity, and all of them can be improved, if not revolutionized, not only by the application of human intelligence (a range-bound resource) but by the possession of more data, and by how accurate, timely, comprehensive, and granular that data can be made. Because AI will revolutionize this last part, the data part, it will have dramatic impacts on all the other human endeavors and on the policy and governance structures that direct them.

Because of this game-changing power, and because AI disruption is taking us into uncharted territory, the consensus today is to see the penetration of AI into K-12 education in terms of a fairly stark dichotomy of infinite solutions vs. worrisome risks and unintended consequences:

On one side of the dichotomy are the possibilities for solutions and efficiencies:
__ reduction of time-consuming tasks
__ administrative efficiencies and consolidations
__ rapid scaling of adaptive, responsive, differentiated, and individualized learning solutions
__ immense scaling and centralizing efficiencies
__ diagnostic efficiencies
__ automated natural-language communication and translation
__ real-time, disaggregated formative and summative assessment data, along with analytic tools

In the opposing column are any number of risks, threats, and unintended consequences:
__ privacy risks
__ child harassment and abuse
__ turning children into screen zombies
__ reducing learning to endless sequential skill-acquisition tasks devoid of meaning and purpose for the learner
__ failing to engage learners with sufficiently complex higher-order learning challenges
__ social-emotional deprivations
__ unequal access to ed-tech resources and, by extension, to equal opportunity
__ implicit bias
__ loss of diverse perspectives resulting from centralized design and from deployment of vastly-scaled-and-integrated AI platforms

 

The disruptive nature of change and the speed and scale at which modern digital technologies transform practice are no doubt among the reasons that the anticipation of AI tools in K-12 schools evokes exuberant optimism in some camps and raises red flags in others.

Perspective and one's relation to technology disruption also play a role...

Ed-tech developers are typically inclined to focus on (and rewarded for) imaginative and quantum-leap-type innovation dreaming and prototyping. Theirs is an endless world of possibilities and one focused on overcoming technological barriers to further innovation.

Educators (and parents and child advocates) tend to be immersed in quite different habits of mind and are traditionally rewarded for different kinds of judgment and for coping with very right-now, day-to-day tasks (something AI developers want to help change!)... Educators must also be keenly aware of the "liabilities" that lurk around all of their decisions, actions, speech, innovation, and learning experiments, and must regularly conform action to any number of federal, state, and local education laws and regulations. In addition, they must negotiate these demands in the context of daily informal and formal relational and value-laden interactions with children and their communities, across any range of social, racial, demographic, economic, and political settings.

The default is to see this dichotomy in rather static terms:

ed-tech developers as dreamers and optimistic actors...

and

educators as risk-focused, risk-averse, and sometimes too small-minded actors...

Part and parcel of this way of thinking is to see most obstacles to innovation largely if not exclusively in terms of dysfunction in school organizations. Indeed, some people have been so sure about schools' inherent aversion to or incapacity for change and innovation that whole charter systems were erected as a workaround. Hence, as an opinion piece in Forbes suggested, "Education might be a bit slower to the adoption of artificial intelligence and machine learning, but the changes are beginning and will continue" (Marr).

This perspective, the idea that change will come despite schools, corners the educator. The implicit message it betrays about public perceptions and assumptions is that innovation offers all these great solutions and efficiencies, but students will have to wait longer than others to reap the benefits because schools are slow to innovate. In some cases, the implicit messaging follows this logic to its end point: with tools this powerful, it's hard to even know what meaningful tasks are left to assign to teachers or to the role of human teaching. One ed-tech reporter, although I think he genuinely sought to express his belief that human teachers will still have critical roles to play alongside AI, could only muster a platitude in this regard: "there's no replacing the human aspect of our teachers..." (Lynch). (What a "human aspect" is meant to refer to is left unsaid.)

When educators allow themselves to fall into this same mindset, they may fail to question it and instead plunge headlong into the underlying fallacy: seeing change through this lens, they cling to the role of protecting children and sounding alarm bells rather than attending to the hard and urgent work that AI will never accomplish on its own: achieving complex and dynamic education reform and improvement goals, and living up to and sustaining intangible, ever-elusive humanistic and social aspirations.

To the extent that educators and parents lose sight of this, and of their own relevance even in a highly automated learning environment, the fallacy becomes a self-fulfilling prophecy as well. What is really needed is more dreaming and more intellectual (not necessarily technological) risk-taking, along with aspirational creativity and experimentation, on the side of schools and educators (and parents with them).

"The presence of risks does not need to remove our optimism; instead it can be a force for developing a more mature goal" (Piech and Einstein).

The risks are real, however...

Typically, given the sensitive and highly regulated nature of educators' relational interactions with young children, educators do bear a risk burden far greater than that of the ed-tech innovator. But before talking more about educators, let's look at the other side of the fallacy: the notion that ed-tech holds all the promise and delivers all the solutions, while the risks fall primarily to educators. In fact, giving in to this fallacy has risks and costs for ed-tech, and for society at large, too.

Why Schools Struggle to Innovate (and Why That’s Not the Whole Story)

A K-12 innovation parable for our times: The Gates Foundation’s 2013 New York City Schools inBloom Technology Initiative

Let's take for example the Gates Foundation's 2013 inBloom technology initiative in New York City schools and see what it reveals about ed-tech mindsets and ed-tech risks and failures (even when the "tools" work...).

This initiative, supported by funds from the Gates Foundation, was intended to deliver more timely and useful data for monitoring student learning--data that provided a logical and needed foundation for more effective, responsive, and targeted instructional planning, in alignment with education advocates' understanding that effective instructional design requires access to timely data about student learning and student progress (what educators refer to as formative assessment data).

Our Mission: inform and involve each student and teacher with data and tools designed to personalize learning

— inBloom website (source: Haimson)

Funded in 2011, implemented in New York City schools in 2013, the initiative was abandoned by the school system in 2014... Not exactly a model of sustainability.


“Rising skepticism among parents…”

“Public confidence in AI is declining, especially among parents…”

Fast forward to generative AI in 2024–2025… notice a familiar pattern? A vendor shows up promising transformational efficiency — automated feedback, automated translation, automated tutoring — and district leaders feel pressure to “act fast” because AI is everywhere in the news.

Meanwhile, parents are asking, very reasonably: Are you training an algorithm on my kid’s essays? Who sees that data? What happens if the bot gives my child biased or inappropriate guidance? Surveys show that a majority of parents are uncomfortable with schools sharing their children’s data with AI systems, especially when it’s not clear how that data will be stored or used.

The surface narrative often becomes “district drags its feet,” but the underlying dynamic is the same one that helped kill inBloom: you cannot bolt a powerful data-driven platform onto a public school system without first building trust, shared goals, and actual co-ownership with the humans who have to fit the new tools into a complex school setting and local school culture.


When Technology Meets School Culture

The initiative as a whole reflected an intervention mindset that some reform thinkers argue is flawed: a one-to-one attribution mindset. It's a traditional way of thinking in the context of technology development: you identify a "problem" and come up with a technology that delivers a "solution." In theory, the better the technology is defined by and adapted to the parameters of "the problem," the better the deliverable will be...

A number of educational researchers who study organizational dynamics and innovation- and change-oriented leadership strategies in school settings, however, have cautioned against this type of linear thinking and implementation mindset in schools. It's one thing to automate attendance record keeping, or to transform the mechanical logistics of testing and test scoring...

To transform learning cultures and long-term student outcomes is far more complex...

It really isn't that schools are inherently "fatal" to innovation or technology adoption efforts. It's that deeper innovation in school culture, and in day-to-day attitudes and practices relating to teaching and learning, requires the concurrent development of organizational routines, mechanisms, and interventions. These must align (and regularly adjust and re-align), all at the same time: multiple dynamic practices; external change forces that are always shifting; diverse individual, group, societal, political, and cultural constituencies and interests; and myriad logistical and regulatory constraints and compliance mandates. And all of this must happen within an overarching framework and hierarchy of coherent goals, needs, challenges, and objectives, negotiated actively and robustly across numerous channels, teams and factions, individuals, standing policy frameworks, and so forth, within larger networks of individual school sites that operate within a larger system (typically a school district, but also a county, a state, and so on). All of these complexities require re-negotiation at various junctures as school systems seek to consolidate ideas, professional learning, and social pressures into new aspirational goals, implement new practices, or adopt new learning targets.

It can all sound abstract and needlessly complex, until it's not. For example, when an education advocate who also happened to be an inBloom insider published a post-mortem of the failed initiative in Forbes, he pointed out that some reasonable concerns about student privacy attributable to the data platform were amplified and whipped up further by external pressures of the moment--namely a social and media climate where fears about privacy were already running high (Horn).

The technological or design issues in the inBloom initiative may or may not have been insurmountable. But the lack of skilled adaptive leadership in negotiating the adoption process and "processing agendas of issues" (to borrow loosely from Fullan's writings on change-oriented leadership skills) makes it hard to know whether data security as such was really a fatal technological flaw in this setting or just a road bump that was poorly managed in implementing the tool and negotiating the concerns and interests of distinct constituencies. What those reporting on these events do make clear is that the larger external concerns about privacy, combined with the utter failure to involve stakeholders (parents especially, in this case) and to elicit and address their "agendas and issues" during the design, adoption, and implementation stages of the initiative, did prove fatal.

According to some reports, the CEO of inBloom, instead of taking any responsibility for the lack of participatory mechanisms, blamed safety advocates for sabotaging the effort. This is a perfect example of how public discourse shapes opinion: because schools are tasked with coordinating and actually responding to the divergent interests and constituencies they are intertwined with, they get cast as opponents of change and innovation, even when they took on the role of proponents of responsible innovation only after being shut out of the design and decision-making process. In all fairness, the inverse can also be true: educators sometimes insulate themselves from external change pressures and networks, including innovation technologies, presumably to make life easier or more manageable, to avoid triggering complex change dynamics within their organization... perhaps even to be rewarded by the systems they work in for maintaining harmony and stability...

A textbook case of "we all lose..."

The zero impact on learning and the insurrection of parents mean that, arguably, schools and students were the biggest losers in the wake of the inBloom failure. But as this fallacy played itself out, and as the ed-tech contingent cast itself as the driver of innovation and blamed educators and parents for exaggerated concerns about privacy risks, there were consequences for ed-tech aspirations and technology applications in schools as well: the initiative barely survived one year once actual implementation work began.

As is not uncommon, other collateral damage follows in the wake of such debacles.

Time, money, and effort are squandered (in some of these initiatives it's lots of money). Likewise, professional morale in the school system probably took a hit too, and instead of building foundations for iterative learning, for machine learning in a K-12 setting, and for broader collaborations between a growing circle of stakeholders, folks returned to square one. The only deliverables were scathing editorials and acrimony within the school community and between ed-tech leaders and the school clients. And, in case you're thinking the inBloom initiative was a one-off in this respect, consider that one news reporter commenting on inBloom quipped that "public education is a legendary graveyard" for these kinds of "ambitious philanthropic plans" (Hall and Callahan).

In theory, one might wish that educators could have learned something from this failed ed-tech and K-12 alliance. What we can presume is that educators didn't learn much or benefit much with regard to formative assessment data--the very aspect of instructional practice the technology platform was designed to support and modernize.

However, I think it's worth asking what still might be learned...After all, don't we teach students that we can all learn from our mistakes?

Whatever cautionary warnings educators bring to the table as the AI wave approaches, they will probably matter little as things stand; "change" will be inevitable and will, yes, bring both the risks and the opportunities of the kinds listed above.

But perhaps the biggest risk is educators marginalizing themselves from (or being marginalized from) the innovation and design aspirations and processes that will shape this and subsequent waves of accelerating innovation, such that past flawed implementation initiatives are repeated. Some vendor will come and sell the district technology officer a solution full of task efficiencies, distracting educators from the changes that are really needed to reap the benefits of these tools in ways that alter student experiences and achievement on both statistically significant and meaningfully qualitative measures. Teachers will be left to clean up the mess, students to suffer the unintended consequences, and educators, along with the school's most critical learning objectives and cultural values, will remain largely disconnected from the transactional decisions that define or redefine learning, or merely "determine" learning experiences and outcomes--the school fitting itself to the parameters and functionalities of the new platforms...

But how does this frame the way we measure the "change" potential of ed-tech tools? Is there any meaningful relation between a tool's "capabilities" and "features" as such and attempts to measure its value, in terms of deepening learning experiences and of the larger social benefits that accrue from transformative changes in teaching and learning that are truly socially adaptive and responsive?

Michael Fullan, arguably today's most prolific, informed, pragmatic, and hands-on reform researcher, thinker, and practitioner, has argued repeatedly that meaningful learning and likewise meaningful change in education settings will be driven only marginally by "tools," "efficiencies," or "solutions."

Meaningful change and change that actually adds "meaning" to educators' own work and to students' learning experiences involves not just "what tool" or "what efficiency" but more complex development of workplace culture, deeper organizational capacity building, and more skillful coordination of complex organizational dynamics (internal pressures and external forces) on the part of leaders and other skilled change agents across school systems. This is not merely a value judgment with regard to the limited role "tools" and "solutions" will play. While it is rooted in certain value-driven visions of education and its core mission and social role, it is also an empirical finding--a conclusion based on engaged study of and active participation in co-leading school change efforts. If I were to translate Fullan's assertion on this point into more blunt language, it might go like this:

'You know what, school leaders and technology vendors? Stop thinking that seriously impacting learning outcomes will depend on whatever bells and whistles the newest ed-tech program, solution, or platform comes with. In fact, all of these carefully developed tools will play at MOST a 25% role in effective change, and any tool or solution--no matter how unique or powerful or well designed--will have its potential impact negated significantly or even entirely if the complexities of managing school change and school change dynamics are concurrently misunderstood or mishandled...'

Again, those aren't Fullan's words; that's my rendition of what Fullan has been trying to yell from the rooftops...

This is a powerful finding for affirming the essential role educators have in actually getting our nation's schools to improve. And... it is a perhaps sobering reminder for ed-tech folks--despite all the dreams, platforms, and big promises...

It's not that ed-tech innovations can't realize these dreams, per se; it's that, in the end, the hopes attached to such dreams can be easily and utterly negated by the complex dynamics at work in school settings, where linear approaches to "problems" and "solutions" simply don't obtain. When new technology tools are adopted, ed-tech simply moves on to the next wave of inventions, rarely looking in the rear-view mirror to learn from this "negation of innovations" phenomenon. Society finds that schools are changed but never changing (plus ça change, plus c'est la même chose--the more things change, the more they stay the same, as they say in France), and education--real student learning and flourishing--still doesn't change much for the better (and the default response is to blame schools and complain again about the need to fix them).

I hope readers may find, from their own perspective and roles "inside" or "outside" education systems, some useful takeaway in these reflections...


From Vision to Reality: Why Co-Design Matters

We’re already watching two possible futures emerge.

In one version, AI shows up informally, bottom-up. Individual teachers quietly use generative AI to draft lesson plans, create leveled practice materials, translate communications home to families, or speed up feedback — mostly to survive unsustainable workloads and staffing gaps. This is resourceful and deeply student-centered, but it’s also happening ad hoc and in isolation, without consistent guidance on data privacy, academic integrity, or instructional quality. Teachers are innovating, but there’s little collaborative learning and a real risk of insufficient safeguards.

In the other version, districts (and sometimes entire states) are trying to build official AI guidance, professional development, and guardrails. That’s a good instinct, but it’s also mostly driven — understandably — by a risk-management mindset rather than an effort to imagine new instructional possibilities. You get rules about what not to do, rather than clarity about how to do things better for students.

On the ed-tech and AI-developer side, there’s a parallel challenge. Many teams genuinely see ways to help: automation that could give teachers back hours, personalized support at a scale schools could never afford with only human labor, translation and accessibility tools that actually expand equity. They’re not wrong about the potential — but they may underestimate how fragile trust is in a public-school environment, how political decision-making becomes when diverse perspectives and factions must be reconciled, and how messy the real “job to be done” is once you’re inside a classroom full of kids with wildly different needs.

In the end, there’s nothing to be gained from an innovators-versus-educators mindset.

What we really have are two groups each doing triage from different angles:

  • Educators trying to protect students, dignity, equity, and instructional quality while keeping the day from falling apart.

  • Innovators trying to build tools that could genuinely ease pressure and open new possibilities — but that will fail (or get blocked) if they ignore governance, community trust, or the lived realities of implementation.

This is why co-design matters.

Not because teachers should merely be “consulted,” or because vendors inherently understand the tools better. Co-design matters because neither perspective alone is enough. The instructional vision, local school culture, and political-ethical accountability that educators wrestle with — but can also harness to drive lasting change — must meet the technical imagination, product velocity, and iteration muscle that the ed-tech side brings.

When both sides operate in silos, there’s little chance of achieving a truly effective integration of technology, instructional culture, and instructional leadership within real school settings.

The inverse is also true: when those perspectives merge early — in iterative problem-framing, inquiry, and co-design, not just during pilot week — change is more likely to gain traction, enjoying broader stakeholder support, scaling more sustainably, and generating valuable use-case narratives that help other schools and districts adapt lessons learned to their own local contexts.


The takeaway I want to propose today is that, the many "risks and unintended consequences" associated with artificial intelligence notwithstanding, the biggest risk for us all is falling prey to misleading assumptions about educators being irrelevant or redundant as the wave of change crashes over schools (or educators letting themselves be fooled into thinking, deterministically, that they are irrelevant and redundant...).

Conversely, the biggest reward may not have anything to do with AI functionalities. Instead, the biggest rewards might actually flow from educators acquiring enhanced capacity to drive powerful learning outcomes by becoming design partners and active, co-equal school change agents in the context of technological innovation.

Moving Beyond the Educator vs. Innovator/Entrepreneur Duality…

We need to move past the oversimplified idea that educators and ed-tech innovators live in separate worlds.

Both operate within complex systems — and both are learning that meaningful innovation happens not in isolation, but through ongoing exchange between design, data, and lived experience.

Big tech has learned (sometimes the hard way) that design is no longer a “laboratory process.” It’s shaped by user habits, local contexts, and messy, real-world decision-making. The same lesson applies to education technology. Tools have no intrinsic power to transform learning apart from the cultural, organizational, and leadership conditions that shape their use.

Educators, for their part, are uniquely positioned to understand those conditions — the human dynamics, competing priorities, and leadership competencies that determine whether a new tool improves learning or simply adds noise. Many already step up to this challenge; others understandably hesitate. But as educators strengthen their capacity for organizational learning and embrace experimentation and inquiry, they become central partners in shaping what “schools of the future” will actually look like.

At the same time, ed-tech developers and their philanthropic sponsors can play a more constructive role when they engage in genuine partnership with educators — not as end users, but as co-designers.

Developers who listen closely to the “jobs-to-be-done” challenges teachers face every day gain insights that no needs-based market analysis can match. These challenges play out in real time, across shifting systems of policy, culture, and practice.

By engaging more deeply with school leaders, teachers, and specialists — through sustained and iterative collaboration — developers can design tools that are more responsive to real constraints and more likely to achieve durable impact.

And educators, by leaning into these partnerships, can extend their influence on the instructional innovation process — helping ensure that innovation in schools aligns with sound and evolving pedagogical practice, proposing new ways to use ed-tech tools, and making sure those tools are integrated in ways that keep the educational setting human-centered, adaptive, and purposeful.


Innovation in education succeeds when both educators and developers move beyond narrow roles — educators embracing design and adaptive leadership, and developers understanding the human and organizational dynamics of schools.


Final Thoughts: Riding the Wave Together

If you’re an educator, don’t wait for the “perfect” AI policy or product to tell you what to do. Start small: explore a few tools, test their limits, talk openly with colleagues about what works and what worries you. Curiosity, not compliance, is what will keep teaching human and forward-looking.

If you’re an ed-tech innovator, slow down long enough to learn how schools actually make decisions — how trust, community, and accountability shape every choice. Spend time in classrooms. Listen before you design. The most successful AI tools will emerge not from faster coding cycles but from deeper empathy with the people who teach and learn every day.

The next wave of education innovation won’t be led by technology alone, or by educators alone. It will come from partnerships that treat experimentation, ethics, and impact as shared work — a collective act of learning in its own right.


We hope you found a valuable insight or two in this post.

Please share any thoughts or comments below, in the space provided.

Want to receive EdPro News & Updates? Simply enter your email address in the form provided below!

Want to talk? Use the handy form on our EdPro Contact Page to get started! Or, learn more about EdPro services here.


Sources:

Bughin, Jacques, et al. "Notes from the AI frontier: modeling the impact of AI on the world economy." McKinsey Global Institute, 4 September 2018, mckinsey.com/…/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy. Accessed 3 June 2020.

Fullan, Michael. Change Forces: probing the depths of educational reform. Falmer Press, 1993.

Fullan, Michael. The New Meaning of Educational Change. 5th ed. Teachers College Press, 2016.

Haimson, Leonie. "Student education data collecting initiative inBloom puts sensitive information at risk" (op-ed). NY Daily News, 15 March 2013, https://www.nydailynews.com/new-york/inbloom-education-data-cloud-jeopardizes-lives-new-york-students-article-1.1288189. Accessed 3 June 2020.

Horn, Michael B. "inBloom's collapse offers lessons for innovation in education." Forbes, 4 December 2014, https://www.forbes.com/sites/michaelhorn/2014/12/04/inblooms-collapse-offers-lessons-for-innovation-in-education/#17461bc0525f. Accessed 3 June 2020.

Koperniak, Stefanie. "Bringing artificial intelligence and MIT to middle school classrooms." MIT News, 30 December 2019, http://news.mit.edu/2019/bringing-artificial-intelligence-and-mit-middle-school-classrooms-1230. Accessed 2 June 2020.

Lynch, Matthew. "5 examples of artificial intelligence in the classroom." The Tech Advocate, 22 August 2017, https://www.thetechedvocate.org/5-examples-artificial-intelligence-classroom/. Accessed 2 June 2020.

Marr, Bernard. "How is AI used in education--real world examples of today and a peek into the future." Forbes, 25 July 2018, https://www.forbes.com/sites/bernardmarr/2018/07/25/how-is-ai-used-in-education-real-world-examples-of-today-and-a-peek-into-the-future/#39008c41586e. Accessed 2 June 2020.

McCambridge, Ruth. "Gates’ $100M philanthropic venture inBloom dies after parents say 'no way'." Nonprofit Quarterly, 22 April 2014, https://nonprofitquarterly.org/gates-100m-philanthropic-venture-inbloom-dies-after-parents-say-no-way/. Accessed 3 June 2020.

Piech, Chris, and Lisa Einstein. "A vision of AI for joyful education: here's how we can avert the dangers and maximize the benefits of this powerful but still emerging technology" (blog editorial). Scientific American, 26 February 2020, https://blogs.scientificamerican.com/observations/a-vision-of-ai-for-joyful-education/.

Rauf, David. "Artificial intelligence in K-12 education: unintended consequences lurk, report warns." Education Week, 5 May 2020, http://blogs.edweek.org/edweek/DigitalEducation/2020/05/ai_report_blog.html. Accessed 3 June 2020.

Ravitch, Diane. The Death and Life of the Great American School System: how testing and choice are undermining education. Basic Books, 2016.

 