Authors, Titles, Abstracts, Presentations


IMPORTANT:
The ASIC speakers and attendees, whether world-famous scientists or graduate students, expect to hear, and are used to hearing, state-of-the-art, leading-edge research. However, ASIC is an interdisciplinary conference and always has a diverse audience. Thus DO NOT give a talk aimed at your co-authors, laboratory colleagues, or even experts in your research domain: GIVE A TALK ACCESSIBLE TO AND UNDERSTANDABLE BY THE DIVERSE ASIC ATTENDEES.

Please email kmanalo@iu.edu if you need to make edits to your submission.

List of Submissions

Speaker Title Abstract Author(s)

Stephan Lewandowsky

University of Bristol

stephan.lewandowsky@bristol.ac.uk

Gender quotas and meritocracy The U.S. administration has taken strong action against what it calls “discriminatory” Diversity, Equity, and Inclusion (DEI) measures by public institutions as well as private corporations. The anti-DEI stance appears to be (at least ostensibly) motivated by a commitment to merit as the sole basis for hiring decisions and performance evaluations. Indeed, at first glance, measures such as gender quotas (requiring a balance between men and women in an organization or team) appear to run counter to relying solely on merit as a criterion for hiring or promotion. On closer inspection, however, it turns out that gender quotas frequently increase the performance of a team or an organization because the use of quotas broadens the pool of competent applicants and prevents mediocre individuals of a dominant group (usually men) from being hired or promoted. We review the literature pertaining to gender quotas and show that quotas very often have a beneficial effect on performance. We then present two studies showing that an empathy-based approach that affirms people’s basic values, and explains the compatibility of quotas with meritocracy, can shift attitudes towards gender quotas in a positive direction, including among people who opposed gender quotas before the study commenced.

Stephan Lewandowsky

University of Bristol

Dawn Holford

University of Bristol

Klaus Oberauer

University of Zurich

k.oberauer@psychologie.uzh.ch

Steps to a computational model of working memory I will outline a computational model that explains some benchmark findings about working memory (WM). The model builds on the following assumptions. Items of a memory set are encoded sequentially by creating temporary bindings between their elements. Bindings are mediated by a population of neurons. Each item recruits a constant proportion of available binding neurons, which are committed exclusively to that item. This creates a primacy gradient of binding strength. Encoding of each new event releases some committed binding neurons, creating a recency gradient of binding strength. Response selection is governed by a signal-discrimination process. Together, these assumptions explain serial-position effects and the set-size effect. Free time after encoding an item allows for the gradual release of those committed neurons that contribute the least to encoding the item. This pruning process frees up binding capacity for subsequently encoded items with hardly any cost for preceding items. This assumption explains the beneficial effect of free time, and the cognitive-load effect in complex span tasks.

Klaus Oberauer

University of Zurich

Angela Nelson Lowe

University of California, San Diego

a1lowe@ucsd.edu

Memory Models Can Read: Combining Computational Models of Memory & Reading with SARK-R The process of learning to read has been studied extensively in the scientific literature, with some scholars suggesting that the investigation of the teaching of reading began over 400 years ago (Fries, 1964). Since the emergence of psycholinguistics in the 1950s, many psychologists have studied this process; however, few if any of these investigations evaluate the process of learning to read in the context of what is already known about memory. The purpose of this project, therefore, is to use existing memory model architecture to provide a different and potentially valuable way of describing the process of learning to read. In this talk I will describe a new extension of the SARKAE memory model (Nelson & Shiffrin, 2013), termed SARK-R, that simulates the process of learning to read using the model’s interactive connections between episodic and semantic memory systems. By relying on the interactions between knowledge and events, the model can explain various components of the learning-to-read process, such as building an orthographic lexicon that includes differential variability for a letter’s visual and sound information, the self-teaching hypothesis of whole-word recognition, and characteristic error patterns in developing readers. This talk will describe the model structure and simulation processes, fits of the model to existing findings, and possible extensions that could test the model assumptions with behavioral studies.

Angela Nelson Lowe

University of California, San Diego

Sabrina Y. Ha

University of California, San Diego

Anne S. Yilmaz

University of California, San Diego

Ian Krajbich

UCLA Psychology

krajbich@ucla.edu

The dynamics of open-ended decisions In open-ended decisions, options are often ill-defined and must be generated by the decision maker. How do people make decisions without predefined options? Do people form a consideration set first and choose from it? Or do they evaluate the options while recalling them from memory? Our study addresses this question using a novel two-session experiment with 30 decision categories. We find that options that are recalled earlier and options that are more favorable are chosen more often and chosen faster. A computational model in which decision makers deliberate while searching through memory can account for choices and response times (RTs) in open-ended decisions and can accurately capture the differences in choice shares under different time constraints. Together, our behavioral and model-based findings shed light on the cognitive mechanisms of memory-based decisions, advancing the understanding of open-ended decisions.

Xiaozhi Yang

University of Pennsylvania

Zhihao Zhang

University of Virginia

Ming Hsu

UC Berkeley Haas

Ian Krajbich

UCLA Psychology

Thomas Palmeri

Vanderbilt University

thomas.j.palmeri@vanderbilt.edu

Neurocomputational model of neurophysiology, electrophysiology, and behavior We recently showed that our neurocomputational model of target selection during visual search, called Salience by Competitive and Recurrent Interactions (SCRI), accounts for neurophysiology (single-unit neural spiking activity) and, when combined with our Gated Accumulator Model (GAM), accounts for saccade response times in awake behaving non-human primates. SCRI assumes that a salience representation is produced by interacting computations that detect each stimulus (localization) and match each stimulus to the target (identification). We show that the temporal dynamics of representations in SCRI not only account for neurophysiology and behavior but also explain the timing and amplitude of a non-human primate analogue of the N2pc, a key electrophysiological neural correlate of attention. We also observe that mathematically opposing identification and salience signals are necessary to account for the variation of N2pc latency and amplitude with the number of distracting items during search, in a manner consistent with the finding of opposing biophysical sources for the N2pc.

Thomas Palmeri

Vanderbilt University

Giwon Bahg

Vanderbilt University

Gregory E. Cox

University at Albany

Gordon D. Logan

Vanderbilt University

Jeffrey D. Schall

York University

Marcel Binz

Helmholtz Munich

marcel.binz@helmholtz-munich.de

Hypothesis testing in natural language Most theories in the behavioral sciences are still verbal. I’ll present a framework for turning such theories into quantitative predictions that can be evaluated against each other. The core of this framework relies on using large language models as prediction engines. I'll discuss advantages, disadvantages, and open questions coming with this framework. Finally, I'll present a case study of how it can be used to identify human preferences.

Marcel Binz

Helmholtz Munich

Bruno Bocanegra

Erasmus University Rotterdam

bocanegra@essb.eur.nl

Prompting LLMs to induce novel structure Over the past few years, large language models (LLMs) have shown remarkable abilities to solve complex tasks that, when performed by human participants, require higher-level problem-solving and reasoning abilities. At the same time, both 'pure' LLMs and more recent 'reasoning' models display surprising brittleness in these abilities, as they are often led astray by superficial characteristics of the task, such as the specific vocabulary used to instantiate the problem. These striking failures can often be traced back to the distribution of token patterns present in the models' original training data, whereby the specific task instantiation used a pattern of tokens that mismatched the problem. In our latest series of experiments, we have been assessing various LLMs on their ability to induce novel structure in a simple problem-solving task that systematically varied the tokens used to instantiate the task. Our experimental results suggest that the full range of performance (from ceiling to floor) that current state-of-the-art LLMs display can be explained by assuming the models perform optimal Bayesian inference based on prior knowledge and the data provided to them in the prompt.

Bruno Bocanegra

Erasmus University Rotterdam

Jonathan Tullis

University of Arizona

tullis@arizona.edu

The Mind Reader’s Dilemma: How We Misjudge What Others Know Accurately estimating others’ knowledge is vital when navigating social environments. For example, teachers must anticipate their students’ understanding to plan lessons and communicate effectively. Yet research consistently shows that we have systematic biases in our estimates of what other people know. One’s own knowledge, for example, has been labeled a “curse” because it can bias estimates of what others know. In this talk, I will examine situational factors and individual differences that exacerbate or reduce biases in estimates of what others know. We will model social metacognitive predictions within a cue-utilization framework of social metacognition, in which predictions of others’ knowledge are dynamically generated by estimators who weigh available and salient cues. We argue that the availability of diagnostic cues and the failure to appropriately shift among relevant cues cause systematic impairments in predictions.

Jonathan Tullis

University of Arizona

Yaoping Peng

Hunan Normal University

Patricia L Foster

Indiana University, Bloomington

plfoster@iu.edu

Bird Flu: Will it be the next pandemic? The highly pathogenic A(H5N1) avian influenza virus, first detected in wild birds in 1997, has since spread globally, carried by migratory birds. Starting in 2022, the virus was detected in US domestic poultry, resulting, to date, in the loss of nearly 200 million birds. In 2024 an unexpected jump spread the virus to domestic cows, followed by transmission to humans and other mammals. In the US to date there have been 70 confirmed cases of bird flu in humans and two deaths. In this lecture I will examine the differences between the human and bird flu viruses, what makes the bird flu virus able to infect so many different species of animals, and the threat it poses of widespread human transmission. I will discuss preventative measures, possible treatments, and whether effective vaccines are being developed.

Patricia L Foster

Indiana University, Bloomington

Fabien Mathy

Université Côte d'Azur

fabien.mathy@univ-cotedazur.fr

Reevaluating spatialization in working memory: Implications for models of mental scanning Spatialization in working memory refers to the mental organization of serial information along a horizontal axis, a phenomenon presumably shaped by reading and writing habits. In Western individuals, early items in a sequence are preferentially associated with the left side of a putative mental line, while later items align with the right, leading to response compatibility effects at recognition. The prevailing account likens working memory to a mental board, suggesting that serial order is grounded in spatial attention mechanisms. However, this framework implies both continuous and symmetric effects of item positions—patterns that are not consistently reflected in empirical data. Notably, response time differences between hands are largely driven by primacy and recency effects, with middle items contributing minimally to spatialization. Additionally, response asymmetries challenge the assumption that both hands contribute equally to response compatibility with the internal space. Moreover, although not yet fully understood, spatialization challenges sixty years of mental scanning studies, potentially explaining conflicting results and replication issues. Specifically, previous studies may have overlooked that response time variations as a function of serial position depend on response hand assignment, which has not always been clearly specified, as seen in some iterations of Sternberg’s paradigm. This work underscores the need for a more precise conceptualization of spatialization in working memory and its methodological implications for studying recognition in working memory more generally.

Fabien Mathy

Université Côte d'Azur

Ed Awh

University of Chicago

awh@uchicago.edu

Content-independent indexing mediates capacity limits in visual working memory Although past neural studies of working memory (WM) have focused on stimulus-specific activity that tracks the stored feature values, a separate line of evidence has revealed neural signals that track the number of items in WM, independent of the contents of those items. Thus, a common neural signature of WM load can be identified for highly distinct visual features, and even across visual and auditory sensory modalities. Our working hypothesis is that these content-independent load signals reflect the operation of spatiotemporal “pointers” that enable the binding of items to the surrounding event context, and that contextual binding is a necessary component of WM storage. To test this hypothesis, we applied representational similarity analysis (RSA) to EEG data to determine the number of pointers deployed across set sizes that ranged from 1 to 8 items. The findings describe a “neural load” function that rises with increasing numbers of to-be-stored items, while controlling for differences in sensory energy and spatial attention. Critically, this function differed sharply as a function of individual differences in WM capacity. Subjects with higher capacity showed a monotonic rise in the number of pointers deployed that leveled off at higher set sizes. By contrast, low-capacity subjects showed an initial rise followed by a sharp decline in the number of pointers deployed at higher set sizes. This empirical pattern dovetails with past behavioral and neural studies that have documented increased costs for low-capacity observers as the number of memoranda exceeds capacity. We conclude that content-independent indexing is a core component of individual differences in WM ability.

Ed Awh

University of Chicago

Henry Jones

University of Chicago

Darius Suplica

University of Chicago

Stuart Smiley

5AM Solutions

stuart.a.smiley@gmail.com

How I learned to stop worrying and love the coding Science is hard. So why make it even harder on yourselves? In this talk, I will discuss one way to make science easier on yourself. Or in other words: how I learned to stop worrying and love the coding. Based on my over 25 years of experience as a software engineer and the research and work of others, we will discuss how we can use cognitive science and working memory to write better software. We will also discuss why this is important to science. As scientists, you all are experts in the design and development of meaningful experiments. Then you must follow up with analyzing the data collected. For many of you, this involves writing a significant amount of software, often in languages such as R and Python, for both data collection and data analysis. Writing good software (i.e., software that actually does what you think it should be doing) and reading someone else’s software are cognitively demanding tasks. Using the research of cognitive science, software engineers have developed strategies that reduce the amount of information we have to hold in working memory to write quality code. I will share these research-based strategies to illustrate how one can make code better, more reliable, and easier to understand, thereby improving the science it supports.

Stuart Smiley

5AM Solutions

Jennifer Lentz

Indiana University

jjlentz@iu.edu

The contributions of left and right ears in dichotic listening In this talk, I will discuss a novel application of cumulative hazard ratios based on reaction time to evaluate the relative roles of the left and right ears in dichotic listening. Experimental data in which listeners reacted to two different digits presented to the left and right ears will also be presented to illustrate the advantages of this efficiency metric. In the experiment, two different decision rules were used. In the OR condition, participants pressed one key when either digit was odd and another key otherwise. In the AND condition, they pressed one key when both digits were odd and another key otherwise. Young listeners with normal hearing demonstrated a greater reliance on the right ear, which tended to dominate the binaural percept, but the degree of right-ear dominance differed between the AND and OR conditions. This approach quantifies the contributions of the two ears more directly than the more commonly used accuracy-based measures or differences in mean reaction time. I will discuss the potential benefits of using this metric for assessing binaural abilities in individuals with asymmetrical hearing loss and with rehabilitative strategies such as cochlear implants or hearing aids.

Jennifer Lentz

Indiana University

Ken Malmberg

USF

malmberg@usf.edu

The von Restorff Effect in Free Recall, Recognition, and Source Memory Distinct items encountered in a sequence are better recalled than less distinctive items (von Restorff, 1933). This is often referred to as a von Restorff or isolation effect. There is rarely an isolation effect for recognition, which is inconsistent with intuition and all known theories of memory. Three experiments extend prior findings to a multi-list procedure, confirming a free recall advantage for unexpected words, but no recognition advantage unless recall is tested before recognition. A somewhat ambiguous effect was observed when source memory was tested. Based on these results, we hypothesized that the lack of a von Restorff effect for recognition is due to constraints of the traditional designs used to study isolation effects and perhaps uncontrolled factors during testing. In three additional experiments targeted specifically at observing isolation effects in recognition and source memory, we obtained more observations per subject in the critical condition (reducing measurement error) and controlled the order in which recognition and source memory items were tested; the results revealed von Restorff effects for both recognition and source memory. Implications for models of memory and attention are discussed.

Ken Malmberg

USF

Siri-Maria Kamp

Trier University

Bertrand Thirion

Inria

bertrand.thirion@inria.fr

Brain functional alignment in the age of generative AI Anatomical and functional inter-individual variability poses a significant challenge to group analysis in neuroimaging studies. While anatomical templates help mitigate morphological differences by coregistering subjects in fMRI, they fail to account for functional variability, often leading to blurred activation patterns on the template due to group-level averaging. To address this problem, hyperalignment identifies fine-grained correspondences between functional brain maps of different subjects, with Procrustes analysis and optimal transport being among the most effective approaches. However, many hyperalignment-based imaging studies rely on the selection of a single target subject as the reference to which all other subjects' data are aligned; the introduction of functional templates eliminates the need for this arbitrary selection, effectively encapsulating population similarities while preserving anatomical coherence. In this talk, we first review the leading brain alignment techniques and the resulting template estimation procedures. We discuss the validation of such templates through decoding experiments, both in standard classification settings and using recent generative decoding approaches, where functional alignment integrates well with deep phenotyping approaches. We present preliminary results on the impact of functional alignment on cross-species brain data analysis.

Bertrand Thirion

Inria

Michelle Awh

University of Pennsylvania

michelle.awh1@gmail.com

A mouse model of binge eating increases habitual behavior Binge eating is characterized by episodes of uncontrollable food consumption, independent of physiological need. A key part of this phenotype is the extension of habitual tendencies to other domains. Habitual behavior can persist independently of the link between action and reward, while goal-directed behavior is flexible and sensitive to changes in this relationship. There is strong motivation to develop animal models of binge eating to enable direct study of the neural circuits underlying this behavior. We show that when mice are given daily restricted access to a palatable high-fat diet (RA HFD), they consume over half of their daily caloric intake within just two hours, a concentrated intake resembling binge eating. We assess goal-directed behavior using an operant task in which mice learn to press a lever for a food reward. Subsequently, the reward is made available regardless of lever pressing. Mice with a history of RA HFD continue pressing despite the weakened action-outcome contingency, suggesting a shift from goal-directed control towards habitual behavior.

Michelle Awh

University of Pennsylvania

Nicholas K Smith

University of Pennsylvania

J Nicholas Betley

University of Pennsylvania

Bruno Nicenboim

Department of Cognitive Science & Artificial Intelligence, Tilburg University

b.nicenboim@tilburguniversity.edu

Reading as a continuous flow of information Traditional models of reading often assume a series of discrete stages in which each level of representation (e.g., orthography, lexical access) is processed separately. Because reading times are too short to accommodate fully serial processing, existing models typically leverage parafoveal preview—partial processing of upcoming words before direct fixation—and posit that some stages operate in parallel (as in E-Z Reader and SWIFT). However, these models focus largely on eye-movement control and struggle to incorporate stages that deal with the higher-order processes supporting comprehension. I propose the CoFI (Continuous Flow of Information) Reader, which replaces strictly sequential stages with a dynamic system of concurrent processing to explain how humans rapidly comprehend text. To demonstrate its ability to capture reading behavior, I implement CoFI Reader as a hierarchical Bayesian model and fit it to self-paced reading data. In this modeling framework, partially completed outputs from lower-level processes continuously feed into higher-level representations, while the final layer inhibits a stochastic timer. Once the timer threshold is reached, the reader moves on to the next word. Crucially, earlier words continue to influence ongoing linguistic processing even after the reader has moved on. Although the experimental paradigm that provided the reading data precludes parafoveal preview, CoFI Reader successfully replicates both the short latencies and the systematic spillover effects observed in empirical studies, findings that other models capture only by invoking parafoveal preview. This talk will describe key aspects of the model’s architecture and parameterization, highlight its empirical fit to real-world data, and discuss the broader implications for theories of reading.

Bruno Nicenboim

Department of Cognitive Science & Artificial Intelligence, Tilburg University

Mathieu Servant

Université Marie et Louis Pasteur (France)

mathieu.servant@univ-fcomte.fr

Deciding with muscles Goal-directed behavior requires a translation of our choices into actions through muscle contractions. Research in cognitive psychology and neuroscience has heavily focused on the decision process during the past sixty years, but the articulation between decision and motor systems remains poorly understood. Progressing on this issue is important for several reasons. From the perspective of decision sciences, it is unclear if the motor system simply executes our choices or actively contributes to deliberation. From the perspective of motor sciences, understanding movements requires an understanding of neural inputs to muscles and upstream processes that determine them. Finally, several psychiatric and neurological disorders appear to affect both decision and motor systems, as suggested by apathy and impulse control disorders, psychomotor retardation symptoms, and paradoxical movements, among others. A precise understanding of these alterations and the development of efficient therapeutic strategies requires an integrated theory of decision and motor processes. In the first part of this talk, I will show that the electrical activity of muscles involved in the response during choice laboratory tasks is strikingly consistent with the process of evidence accumulation assumed by sequential sampling decision models. I will then introduce a theoretical framework linking decision and motor processes, the gated cascade diffusion model, which provides a good quantitative account of both behavioral and muscular data.

Mathieu Servant

Université Marie et Louis Pasteur (France)

Nathan J. Evans

University of Liverpool (England)

Gordon D. Logan

Vanderbilt University (USA)

Thibault Gajdos

Aix-Marseille Université (France)

Christoph Huber-Huber

University of Trento, Italy

christoph@huber-huber.at

Active visual perception and the role of prediction across eye movements In the real world, our eyes move about three to four times per second in a quick and jerky way, dividing the seemingly continuous subjective visual experience of the world around us into perpetually changing discrete snapshots. Surprisingly, we are not aware of these particular spatiotemporal dynamics of active visual perception. It has been suggested that the reason for this unawareness is that our brains process sensory information in a predictive way. In this talk, I will present EEG/MEG and eye-tracking coregistration studies that aim at better understanding these predictive aspects of active visual perception. I will arrive at the conclusion that active vision really changes how we see, and I will end with some considerations about the neural basis for the temporal structure of active gaze behavior. In sum, this talk will highlight the importance of considering active vision in the neurocognitive study of visual perception.

Christoph Huber-Huber

University of Trento, Italy

Kimele Persaud

Rutgers University - Newark

kimele.persaud@rutgers.edu

Bayesian Models of Memory for Expectation-Congruent and Incongruent Items across Development Bayesian models that assume recall is a mixture of prior knowledge and noisy memory traces have been used to explain memory in children and adults (Persaud et al., 2016, 2021). Yet two challenges to this modeling framework for understanding expectations and memory more broadly persist: 1) these models have overwhelmingly been applied to recall of items that are congruent with or unrelated to people’s prior expectations, and 2) current models fail to capture other factors that influence memory in development, such as attention and inhibitory control. In this talk, we first present empirical data and modeling from past work assessing the influence of prior knowledge on recall of congruent items (i.e., color recall) across development (Persaud et al., 2021). Building on the limitations of this work, we then present simulations from a new model, akin to that of Hemmer & Steyvers (2009), that captures the impact of both executive functions and expectation-incongruence on memory in children and adults.

Kimele Persaud

Rutgers University - Newark

Carla Macias

Rutgers University - Newark

Fenna Poletiek

Leiden University / Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands

poletiek@fsw.leidenuniv.nl

Mechanisms of serial recall can explain the preponderance of center embedded syntactic structures A defining characteristic of human language is hierarchical recursion. Recursive loops (e.g., relative clauses) in sentences can either be embedded in the center of a sentence or cross each other. It is still unknown why in Indo-European languages the possibility for center-embedded (CE) recursion seems ubiquitous, as in The boy A1 the dog A2 chases B2 falls B1 (A1A2B2B1), whereas crossed-dependent (CD) orderings of recursion hardly ever occur (A1A2B1B2). In both structures, serially encoded words (e.g., boy and dog) must be retrieved and bound to later upcoming words (chases and falls). The exceptional rarity of CD as compared to CE grammars is surprising considering that the latter produce dependent elements at longer distances than the former. We propose that the preponderance of CE can be explained by the item retention and retrieval mechanisms of serial recall, combined with word-binding operations (e.g., word A is the subject noun of verb B) specific to language comprehension. Our account explains that backward retrieval (retrieving dog (A2) first and boy (A1) next, as in CE) optimizes memory performance as compared to forward retrieval, as in CD. We design two Retrieval and Binding Performance (RBP) functions, for CE and CD, and show by numeric comparison that RBP for CE is larger than for CD for a given sentence. Moreover, independent serial recall data support this difference in efficacy between the two strategies under conditions that mimic sentence processing. We propose that CE is better molded to human memory than CD, which might explain why CE has prevailed during language evolution.

Fenna Poletiek

Leiden University, NL

Peter Hagoort

MPI for Psycholinguistics, Nijmegen, Netherlands

Bruno Bocanegra

Erasmus University, Rotterdam, Netherlands

Philippe Colo

University of Bern & ETH Zürich

colo.philippe@gmail.com

Ethical Equilibrium Concepts in Game Theory Equilibria are the most central concept in game theory. For a given strategic interaction between players, equilibrium concepts determine how their preferences and beliefs will balance each other out. Among existing equilibrium concepts, the Nash equilibrium holds a hegemonic and rarely questioned position. While empirically successful, the logic it depicts often leads to normatively questionable outcomes. Scholars have shown that equilibrium concepts in games are equivalent to beliefs players hold regarding the game they are playing and how other players take decisions. I will call these beliefs behavioural beliefs (bbeliefs). In this paper, I reflect on the nature and normativity of bbeliefs. I will argue that we can have a positive deontological influence on bbeliefs and ought to exercise it. This equates to adopting multiple bbeliefs and searching for evidence regarding those held by our co-players, an attitude that amounts to a form of a priori suspension of judgement on bbeliefs.

Philippe Colo

Geoff Woodman

Vanderbilt University

geoff.woodman@vanderbilt.edu

Models of long-term memory already produce key signatures of visual working memory The study of working memory and long-term memory diverged decades ago resulting in views about the diagnostic nature of certain measures that might not be valid. Specifically, it is viewed as critical to show a capacity limit to demonstrate that we are measuring visual working memory storage and not the unlimited capacity visual long-term memory store. Although sound logically, it remains to be seen whether these hallmarks of working memory might already fall out of existing models of long-term memory. Here we show that the precision and set size effects that we visual working memory researchers often use to validate our assumption that we are studying visual working memory are also observed as a natural result of the dynamics of contextual models of long-term memory storage and retrieval. Our findings motivate re-examining unified models of human memory, paired with multi-modal empirical studies that target key questions about the nature of visual working memory and long-term memory.

Sean Polyn

Vanderbilt University

Geoff Woodman

Vanderbilt University

Richard Shiffrin

Indiana University

shiffrin@iu.edu

On the fundamental processes of recognition memory Three long-term recognition memory studies were carried out to explore the basic processes of recognition memory and the ability of current models to generalize to new settings. Lists of 12 or 24 words, 12 or 24 pictures, or a mixed list of 12 words and 12 pictures were studied. Short-term memory was cleared after study and prior to test. In Experiment 1 memory was tested with two items: both or neither from the list, or one from the list. In 2AFC blocks the more likely old item was to be chosen; in 4WC blocks the two items were to be classified as both old, both new, left old and right new, or the reverse. Experiment 2 was identical except participants were told to select the more likely new item. Experiment 3a had blocks of single-item old-new testing and 2AFC testing; Experiment 3b had blocks of single-item old-new testing and 4WC testing. The 288 distinct conditions giving probabilities of correct and error responses were predicted very accurately by the simple REM model of Shiffrin & Steyvers (1997).

Zainab Mohammed

Indiana University

Constantin Meyer-Grant

University of Freiburg

Richard Shiffrin

Indiana University

Ghislain Fourny

ETH Zurich

ghislain.fourny@inf.ethz.ch

Progress report on an extension theory of quantum physics Bell inequalities rest on two assumptions, free choice and locality, at least one of which must be broken by nature given the observed violations. This leaves the door open to proving that Einstein was right about the incompleteness of quantum theory, if we are willing to weaken the free choice assumption. In this talk, I will give a status update on our recent progress in building this non-Nashian theory of physics, including an AI-based approach for determining underlying formulas based on observed correlations.

Ghislain Fourny

ETH Zurich

Sky Jiawen Liu

Cardiff University

skyliu665@gmail.com

Which violent offense is more serious? Spiking, setting fire or choking? In this project, we examine how lay individuals assess the severity of violent offenses. Previous literature suggests that the ranking of violent offenses in terms of severity tends to be more consistent across different respondent groups and scenarios compared to numerical ratings of seriousness. However, using a pairwise ordering task, we found that the perceived severity of violent offenses varies depending on the context in which the offense occurs.

Sky Jiawen Liu

Cardiff University

Zainab Mohamed

IU Bloomington

Zrmohame@iu.edu

On the fundamental processes of recognition memory Three long-term recognition memory studies were carried out to explore the basic processes of recognition memory and the ability of current models to generalize to new settings. The experiments varied list length, stimulus type, and testing format within and between participants. Response probabilities were predicted accurately by the REM model of Shiffrin & Steyvers (1997). The model's ability to generalize and account for data from such a large number of conditions is surprising because REM is missing many components known to play an important role in recognition memory. This suggests that the few components it does include are fundamental and important enough to produce a good approximation to most results from recognition memory studies. These processes include incomplete or error-prone storage, comparing each test probe to memory traces, and computing a likelihood ratio that the trace is old, based on which an old/new recognition decision is made.
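The three processes named in the abstract (error-prone storage, probe-to-trace comparison, likelihood-ratio decision) can be illustrated with a toy simulation. The sketch below is not the published REM implementation; the parameter values (g, c, u, w) and the simplified one-shot storage rule are illustrative assumptions, but the structure follows the general REM logic: features are geometrically distributed, stored incompletely and with errors, and each probe is compared against all traces to compute an average likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative parameters (not the published values):
# g = geometric base rate, c = prob. a stored feature is copied correctly,
# u = prob. a feature is stored at all, w = number of features per item
g, c, u, w = 0.4, 0.7, 0.8, 20

def item():
    # an item is a vector of geometrically distributed feature values (>= 1)
    return rng.geometric(g, size=w)

def store(it):
    # incomplete, error-prone storage: each feature is stored with prob. u;
    # a stored feature is a correct copy with prob. c, else random noise.
    # 0 marks an unstored (empty) feature slot.
    stored = rng.random(w) < u
    correct = rng.random(w) < c
    noise = rng.geometric(g, size=w)
    return np.where(stored, np.where(correct, it, noise), 0)

def likelihood_ratio(probe, trace):
    # ratio of P(trace | trace stores the probe) to P(trace | unrelated item)
    lam = 1.0
    for p, t in zip(probe, trace):
        if t == 0:       # unstored feature: uninformative
            continue
        if p == t:       # match: rarer values are stronger evidence of "old"
            base = g * (1 - g) ** (t - 1)
            lam *= (c + (1 - c) * base) / base
        else:            # mismatch: only possible through a storage error
            lam *= (1 - c)
    return lam

def recognize(probe, traces):
    # average the likelihood ratios over all traces; "old" if odds exceed 1
    odds = np.mean([likelihood_ratio(probe, t) for t in traces])
    return odds > 1.0

# tiny demo: study a 12-item list, then probe memory with a studied item
study = [item() for _ in range(12)]
memory = [store(it) for it in study]
decision = recognize(study[0], memory)
```

A perfectly stored copy of the probe always yields a likelihood ratio above 1 (every matching feature multiplies in a factor greater than 1), while an entirely empty trace yields exactly 1, which is what makes the mean-odds decision rule sensible.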

Zainab Rajab Mohamed

IU Bloomington

Constantin G. Meyer-Grant

University of Freiburg

Richard M. Shiffrin

IU Bloomington

Mark Steyvers

University of California, Irvine

mark.steyvers@uci.edu

Communicating Uncertainty with Large Language Models Large language models (LLMs) play a growing role in decision-making, yet their ability to convey and interpret uncertainty remains a challenge. We examine two key issues: (1) how LLMs interpret verbal uncertainty expressions compared to human perception and (2) how discrepancies between LLMs’ internal confidence and their explanations create a disconnect between what users think the model knows and what it actually knows. We identify a calibration gap, where users overestimate LLM accuracy, and a discrimination gap, where explanations fail to help users distinguish correct from incorrect answers. Longer explanations further inflate user confidence without improving accuracy. By aligning LLM explanations with internal confidence, we show that both gaps can be reduced, improving trust calibration and decision-making.

Mark Steyvers

University of California, Irvine

Olgun Sadik

Indiana University, Intelligent Systems Engineering

olsadik@iu.edu

Exploring Student Interactions with Generative AI in Engineering Education There has been considerable contemporary interest in using generative AI tools for teaching and learning in K-16 education. Recent research has provided evidence of their effective use in enhancing instructor productivity. However, understanding how students utilize these tools is crucial for educators, especially given concerns related to ethical use. While some instructors have developed policies to limit student reliance on these tools, such measures are not sustainable, as students will inevitably engage with them. To understand student usage and provide a guiding framework, this study employs discourse analysis to explore how students interact with a generative AI tool in a software systems engineering course. Students were asked to share their conversations with the AI tool, along with reflections on their experiences. The interactions were analyzed to examine both the structural and functional aspects of language use. Additionally, the reflections were analyzed using thematic analysis.

Olgun Sadik

Indiana University, Intelligent Systems Engineering

Nada Aggadi

Indiana University Bloomington

naggadi@iu.edu

Measuring Factors Associated with Identification Thresholds in Fingerprint Analysts The goal of this research is to measure and identify factors associated with identification thresholds in fingerprint analysts. Conclusions reached in friction ridge comparisons require the application of an individual threshold. While previous studies have investigated the mechanisms behind value determinations, or the perceived stress reported by forensic examiners, only a few have focused on the influence of personality traits and environmental factors on identification thresholds. This study measures how individual traits and workplace policies may shape these thresholds. Participants were presented with latent prints and tasked with making value determinations before conducting a latent print comparison. Post-trial, participants responded to a series of survey inquiries focusing on their personalities and interactions within the work environment. Our results demonstrate a significant positive correlation between need-for-cognition (NFC) score and the number of Identification decisions as well as a

Nada Aggadi

Indiana University Bloomington

Tom Busey

Indiana University Bloomington

Meredith Coon

AVER, LLC
Contact reberle@indiana.edu with questions.