Psycholinguistics Seminar

心理語言學專題討論

Fall 2025       Thursday 14:10-17:00         文學院 (Humanities) 413

編號 (Course code number): 1309400

 

UPDATED 2025/9/19

 


Me:

James Myers (麥傑)

Office: 文學院247

Tel: x31506

Email: Lngmyers at the university address

Office hours: Thursday 10:00-noon, or by appointment (made at least 24 hours ahead)

 

Goals:

You will read and discuss psycholinguistic research and then conduct and report on your own study. No prior experience with psycholinguistics is necessary. The theme of the main readings is the role of deep learning (AI) models in psycholinguistics, but we will discuss other topics as well.

 

Grading:

10% Class participation

40% Leading discussion

10% Presentations (12/18)

40% Term paper (due Wednesday 12/24)

 

What the class is like:

This is a discussion class: all we will do is read papers (real ones, not from a textbook) and discuss them together. Class participation thus means discussing: you read, think, talk, and respond to others’ ideas.

Every week somebody will lead the discussion on the week’s reading(s), using a handout with questions to inspire our discussion. The questions should be organized logically so that we address the most important issues in the paper, situating them in a larger context, but they should also let us clarify smaller points in the paper that may be confusing. You are encouraged to ask questions that even you don’t know how to answer, but you are responsible for bringing the focus back to the big issues if we get lost. Post your questions to the eCourse2 system by noon on class day, so I can download them and show them in class.

By 11/20 (but the earlier the better), you should choose a topic of your own to write about. The only restriction is that your paper must report a new psycholinguistic experiment of your own on real people, with or without any relation to AI. After you choose your topic, class discussions will focus on papers that you choose to help you with your project.

At the end of the semester (12/18), you’ll give a conference-style presentation about your research; the precise length will depend on the number of students. The paper is due six days later (Wednesday 12/24), emailed to me as a PDF by 5 pm. The paper should be about 10-20 pages, in English, with formatting like that of the real papers we read. I’ll grade it in the usual way (style, logic, theory).

Obviously, you should submit your classwork on time and must not plagiarize. Moreover, despite the class theme, when writing your discussion questions, presentation, and term paper, you may not use AI for anything except English help. Even if your paper is about AI, the paper itself must be created solely by you (see Kosmyna et al., 2025, for some reasons why).

 

Schedule

* marks due dates for things relating to your paper

Week | Topic/Activity | Readings | Leaders
9/11 | How can psycholinguists use AI? | (no reading) | Myers
9/18 | What were the psycholinguistic debates over network models before deep learning? | Pinker & Ullman (2002a,b) vs. McClelland & Patterson (2002a,b) | Myers
9/25 | How do LLMs work? | Alammar (2018) | Myers
10/2 | Do LLMs affect human language behavior? | Kosmyna et al. (2025) | 邦佑/惠如
10/9 | Do human brains work like LLMs? | Goldstein et al. (2022); Caucheteux & King (2022) | 柏承/品優
10/16 | Do LLMs show human-like linguistic behavior? | Chang & Bergen (2024); Linzen & Baroni (2021) | 岳熹/又睿
10/23 | How do AI models process semantics and pragmatics? | Michaelov & Bergen (2022); Reimann & Scheffler (2025) | 雨潼/柏承
10/30 | Can AI models learn word form patterns? | Vitevitch (2025); Silfverberg et al. (2021) | 邦佑/于芮
11/6 | Dimitrios Meletis online presentation | TBA |
11/13 | Do deep learning models learn language like human children? | Vong et al. (2024); Cao et al. (2025) | 岳熹/品優
*11/20 | Discuss paper topics | |
11/27 | Your choice | TBA | TBA
12/4 | Your choice | TBA | TBA
12/11 | Your choice | TBA | TBA
*12/18 | Presentations [last class] | |
*12/24 (Wed) | TERM PAPER DUE | |

Readings

Alammar, J. (2018). The Illustrated Transformer [Blog post]. https://jalammar.github.io/illustrated-transformer/

Cao, S., Xu, Y., Zhou, T., & Zhou, S. (2025). Is ChatGPT like a nine-year-old child in theory of mind? Evidence from Chinese writing. Education and Information Technologies, 30(5), 5787-5811.

Caucheteux, C., & King, J. R. (2022). Brains and algorithms partially converge in natural language processing. Communications Biology, 5(1), 1-10.

Chang, T. A., & Bergen, B. K. (2024). Language model behavior: A comprehensive survey. Computational Linguistics, 50(1), 293-350.

Goldstein, A., Zada, Z., Buchnik, E., Schain, M., Price, A., Aubrey, B., ... & Hasson, U. (2022). Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25(3), 369-380.

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., ... & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872. https://www.brainonllm.com/

Linzen, T., & Baroni, M. (2021). Syntactic structure from deep learning. Annual Review of Linguistics, 7(1), 195-212.

McClelland, J. L., & Patterson, K. (2002a). Rules or connections in past-tense inflections: what does the evidence rule out? Trends in Cognitive Sciences, 6(11), 465-472.

McClelland, J. L., & Patterson, K. (2002b). ‘Words or Rules’ cannot exploit the regularity in exceptions. Trends in Cognitive Sciences, 6(11), 464-465.

Michaelov, J. A., & Bergen, B. K. (2022). Collateral facilitation in humans and language models. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL) (pp. 13-26). Association for Computational Linguistics.

Pinker, S., & Ullman, M. T. (2002a). The past and future of the past tense. Trends in Cognitive Sciences, 6(11), 456-463.

Pinker, S., & Ullman, M. T. (2002b). Combination and structure, not gradedness, is the issue. Trends in Cognitive Sciences, 6(11), 472-474.

Reimann, S., & Scheffler, T. (2025). The struggles of Large Language Models with zero- and few-shot (extended) metaphor detection. Journal for Language Technology and Computational Linguistics, 38(2), 97-109.

Silfverberg, M., Tyers, F., Nicolai, G., & Hulden, M. (2021). Do RNN states encode abstract phonological alternations? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 5501-5513). Association for Computational Linguistics.

Vitevitch, M. S. (2025). Examining ChatGPT with nonwords and machine psycholinguistic techniques. PLoS One, 20(6), 1-20.

Vong, W. K., Wang, W., Orhan, A. E., & Lake, B. M. (2024). Grounded language acquisition through the eyes and ears of a single child. Science, 383(6682), 504-511.