ChatGPT and Beyond: The Best Online Resources for Evaluating Research...

If we’re going to make “research-based” claims about education — and we are! — then we better have the research goods.

So, how can we at Learning and the Brain — and you as teachers and administrators — find and evaluate research?

For a few years now, I’ve relied on two great sources. One helps me see whether a particular study is an outlier or in alignment with other research findings; the other shows a spider web of research studies on any given topic.

They’re both great, and you should check them out. (I’m on those sites several times a week.)

But, we’ve heard A LOT about AI in recent weeks. Can we rely on newer websites to help us out?

I spent an hour or so on Monday playing around on two sites; here’s what I found.

The Inevitable ChatGPT

I won’t rehash all the Twitter agita on this subject, but gosh ChatGPT has gotten lots of attention. (Twitter tells me it’s either the savior of education, the end of education, a deranged stalker of New York Times reporters, or a nothingburger.)

Can it help us explore research into education?

To answer this question, I asked ChatGPT to describe “schema theory.” Here are the first two paragraphs of its answer:

“Schema theory” is a cognitive psychology theory that describes how people understand, organize, and store information about the world. The theory suggests that people use mental frameworks or “schemas” to categorize information and simplify their understanding of the world around them.

A schema can be thought of as a mental structure that represents knowledge about a particular category of information, such as objects, events, or social interactions. For example, a person might have a schema for a “restaurant” that includes information about the typical layout, menu items, and social norms associated with dining out.

So far, so good. (The restaurant example of schema is quite common.)

I asked it to dig a bit deeper, and to identify problems or limitations with schema theory. Again, I got a solid set of answers. For instance, a limitation:

Overgeneralization: Schemas can sometimes lead to overgeneralization, causing people to perceive things in a stereotypical or biased manner. For example, people with a schema of a “criminal” may unconsciously associate criminal behavior with certain groups.

When I asked it for research supporting schema theory, its first three answers were several decades old, so I asked it for research from the last 20 years.

Of the three it listed, this one looked most promising:

Lee, J. J., & Anderson, N. (2015). The role of schema in the interpretation of ambiguous scenes. Memory & Cognition, 43(8), 1186-1197.

But, strangely, when I tried to find that study on Google Scholar, it didn’t seem to exist.

Stranger still, when I went to the website for the journal Memory & Cognition, it wasn’t there either. (To be clear: the journal exists; the article doesn’t exist in it.)

You’re not going to believe this, but it seems that ChatGPT simply MADE UP a research study supporting schema theory.
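For readers who want to automate this kind of sanity check, here’s a minimal sketch. It queries Crossref’s public REST API (api.crossref.org) for a cited title and checks whether any indexed work closely matches it. The similarity threshold and the helper names (`title_similarity`, `citation_exists`) are my own choices, not part of any standard tool; treat this as a starting point, not a definitive verifier.

```python
# Sketch: check whether a cited title appears in Crossref's index.
# A fabricated citation usually returns no closely matching title.
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher


def title_similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two titles (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def citation_exists(title: str, threshold: float = 0.9) -> bool:
    """Return True if Crossref indexes a work whose title closely matches.

    Queries the public works endpoint with a bibliographic search and
    compares the top results against the cited title.
    """
    url = ("https://api.crossref.org/works?rows=5&query.bibliographic="
           + urllib.parse.quote(title))
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return any(
        item.get("title")
        and title_similarity(title, item["title"][0]) >= threshold
        for item in items
    )
```

Usage would look like `citation_exists("The role of schema in the interpretation of ambiguous scenes")` for the study ChatGPT offered above. Note that a fuzzy title match is only a first pass: a near-miss might be a real paper with a slightly different title, so a manual check of the journal, volume, and authors is still worthwhile.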

The Chat Gets Awkward

I pointed out to ChatGPT that the source doesn’t exist, and it responded:

I apologize, it seems I made a mistake. The study by Lee and Anderson (2015) that I mentioned in my previous answer does not seem to exist. As a language model, I strive to provide accurate and up-to-date information, but sometimes I may make errors or misunderstand the information given to me. I appreciate your feedback and will take this into account in my future responses.

Okay, I’m not happy about this. But it did fess up to its mistake, and promise to do better.

So, the conversation continued:

Me: Can you identify recent studies — within the last 20 years — that study the cognitive psychology of schema theory specifically for students learning academic topics?

ChatGPT: Yes, there have been recent studies that have investigated the role of schemas in students’ learning of academic topics.* Here are a few examples:

Kail, R. (2017). Schema theory and its implications for the design of learning environments. Review of Educational Research, 87(1), 93-135.

Well, that first one looks promising. But, I don’t want to be fooled twice, so I asked the obvious question:

Me: Does the Kail 2017 study really exist?

ChatGPT: Yes, the Kail (2017) study does exist. It was published in the Review of Educational Research, a highly respected peer-reviewed journal in the field of education.

Reader, the Kail study does NOT exist! ChatGPT just MADE UP A SECOND STUDY. And, having made it up, it then stuck by its (untrue) story.

I won’t go through all the iterations of this journey in detail.

But, yes, I asked about a third study. Yes, ChatGPT assured me it exists. No, it really doesn’t.

So, at this point, I’m kinda done. Any artificial intelligence that simply conjures sources out of thin air…I myself won’t be relying on it soon.

If only there were another option…

Another Option!

At about this time, my friend Rob McEntarffer asked me if I’d tried Elicit. (Important note: bad things might happen if you type “illicit” instead of “elicit.” I haven’t checked, but: be careful out there.)

Rob is wise in the ways of MBE, and so I tried it.

At this point, I’m as hopeful about Elicit as I am discouraged about ChatGPT.

Elicit asks users to frame fairly specific questions. It then looks for study abstracts that seem relevant to those questions, and reports back.

So, I asked:

“Can schema theory improve classroom instruction?”

I got a list of seven studies. All seven sounded exactly on topic. And — here’s some surprisingly good news — at least four of the studies exist!

I know because I downloaded them and printed them out. They are, in fact, in my “read this right now” pile.

Now, Elicit has a somewhat easier job than ChatGPT, because it’s answering narrow questions about research studies, not broad questions about ANYTHING.

But, I found it easy to use and — at least on my first test drive — reliable and helpful.


If you’re looking for online sources to find and evaluate research-based claims:

Add Elicit to the sources above as a useful research resource.

Until it stops making stuff up, avoid ChatGPT.

* Notice, by the way, that ChatGPT got the possessive (“students’ learning”) correct in this answer. That’s an unusual rule — “use the possessive case before gerunds” — and a difficult apostrophe: plural possessive. So, it’s not great with sources, but it knows from grammar and punctuation!

category: L&B Blog

8 Responses to ChatGPT and Beyond: The Best Online Resources for Evaluating Research...


  8. Max says:

    The challenge in these early days of generative AI is that you can’t simply write “avoid ChatGPT” without better describing the tools you are using. Generative AI is now available in the free version of ChatGPT (that is, 3.5); in ChatGPT-4; in MS Bing, which uses ChatGPT-4 connected to the internet; in Google’s Bard, which uses PaLM (its own LLM) connected to the internet; and in OpenAI’s ChatGPT-4 (Plus), which is now rolling out plug-ins. Yes, the free version of ChatGPT often hallucinates references… and all the other systems do too… so far. Even today, however, some are better than others. In every case it is extremely important to verify output. However, I think suggesting that people avoid “ChatGPT” does a disservice in that it will discourage exploration of tools that are disrupting all of us, especially those of us in education fields. It is imperative that people experiment with and learn about these tools. I would encourage clearly defining and describing which tools, and under what circumstances they are being used (prompts, etc.), and supporting more experimentation. The more we know about these tools, the better for all of us.
