Authors, Titles, Abstracts, Presentations
IMPORTANT:
The ASIC speakers and attendees, whether world-famous scientists or graduate students, expect to hear, and are used to hearing, state-of-the-art, leading-edge research. However, ASIC is an interdisciplinary conference and always has a diverse audience. Thus DO NOT give a talk aimed at your co-authors, laboratory colleagues, or even experts in your research domain: GIVE A TALK ACCESSIBLE TO AND UNDERSTANDABLE BY THE DIVERSE ASIC ATTENDEES.
Please email kmanalo@iu.edu if you need to make edits to your submission.
List of Submissions
Speaker | Title | Abstract | Author(s) |
---|---|---|---|
Zhong-Lin Lu NYU Shanghai & NYU |
mrHBM: Multi-Resolution Hierarchical Bayesian Modeling for High-Resolution Behavioral Analysis | This talk presents a pyramid-based multi-resolution Bayesian framework for estimating high-resolution latent variables from sparse behavioral data, addressing the limitations of aggregated data in capturing fine-grained dynamics in domains such as education, perceptual learning, and clinical vision, where sparse data and large covariance matrices challenge traditional hierarchical models like HBMc. The framework constructs a parameter pyramid with a top-layer HBM incorporating covariance for coarse trends, iteratively refining estimates at finer resolutions using a difference pyramid, supported by formal proofs for the 1D and 2D cases, and employs three Bayesian variants—BIP, HBMv, and HBMc—with HBMc enhancing precision via hierarchical pooling. Implemented in PyMC for scalability, it avoids the computational burden of large covariance matrices. Tested across four experiments with eight models (BIP, HBMv, and six pyramid variants) on datasets including 36 Gaussian time series, perceptual learning data from 49 subjects, 36 perimetry sets, and CSF measurements from 36 subjects over 24 visits, the framework demonstrated superior performance. The HBMc-HBMv model consistently outperformed the others (WAIC weight = 1.0), achieving the lowest RMSE (0.0330–0.1115) and SD (0.0212–0.0941) across experiments, reducing errors by up to 74.1% and variability by up to 78.5% compared to BIP (WAIC weight = 0.0), which showed the highest errors (e.g., RMSE: 0.1666–0.4013 in Experiment 1). Compared to wavelet analysis, the method prioritizes discrete behavioral inference and the capture of detailed patterns; in contrast to Gaussian processes, it offers reduced computational cost and enhanced interpretability, opening up potential applications across a wide range of fields, including learning, healthcare, finance, and beyond. | Zhong-Lin Lu NYU Shanghai & NYU |
Madeleine Ransom University of British Columbia |
To what extent can empirical methods reveal the contents of perception? | Do we represent only low-level properties such as shape, color, and motion in perceptual experience, or do we also represent high-level properties such as natural and artifactual kind properties, aesthetic properties, and moral properties? Recent philosophical debate on this question has increasingly centered on a method long employed by psychologists to disentangle perceptual from cognitive representation. The common mechanism method appeals to empirical evidence that a given perceptual effect common to low-level properties is also present with respect to certain high-level properties, and posits that the best explanation for this is that high-level properties are also represented in perceptual experience. Here I characterize this method and evaluate its prospects for advancing psychological and philosophical debate, defending it from five objections: the empirical results can only be interpreted in light of background theory; the effect appealed to is not exclusive to the perceptual system; mechanisms cannot be individuated without appealing to background theory; the effect can be explained in terms of low-level properties only; and conflicting empirical results should cause us to mistrust the strategy entirely. In the course of discussion I highlight places where further philosophical, empirical, or cross-disciplinary work is required. | Madeleine Ransom University of British Columbia |
Martial Mermillod University Grenoble Alpes |
The Role of Top-Down Connections in Typical Cognition, Autism, and Artificial Intelligence | Unlike standard artificial neural networks such as Convolutional Neural Networks (CNNs) or Transformers, the human brain relies heavily on top-down synaptic connections in both low-level visual cognition (e.g., recognition, categorization) and complex socio-emotional processing (e.g., affective evaluation, biases). This talk will (1) review theoretical models underlying these cognitive, emotional, and social processes, (2) present experimental evidence supporting the role of top-down modulation in typical individuals, (3) explore applications in the field of autism spectrum disorder (ASD), and (4) extend these discoveries to artificial intelligence (AI). On this issue, recent advances in AI have demonstrated the critical role of top-down mechanisms in predictive processing, particularly in time-series anticipation. These studies suggest that hierarchical predictive coding—where higher-level expectations shape lower-level perception—enhances the brain’s ability to anticipate sensory input efficiently. In ASD, atypical top-down processing may lead to a stronger reliance on bottom-up integration, potentially contributing to difficulties in high-level information processing and socio-emotional functioning. Investigating these mechanisms could provide novel insights into both neurocognitive models and AI architectures designed for predictive tasks. | Martial Mermillod University Grenoble Alpes |
Tim Pleskac Indiana University |
Modeling the dynamics of subjective probability judgments | Subjective probabilities (SPs) often violate description invariance, whereby different descriptions of the same event can result in different SPs. This violation is often explained by the hypothesis that SPs are based on the support people accumulate from an event’s description rather than the event itself. As a result, different descriptions can elicit vastly different SPs. However, extant models of support are static, failing to capture how beliefs dynamically evolve. We address this limitation by adapting Decision Field Theory to model SPs. Our proposed Judgment Field Theory (JFT) leverages a similar evidence accumulation framework, modeling how forecasters iteratively construct beliefs about a hypothesis. A key insight of JFT is that different belief states correspond to different SPs, allowing forecasters to report an SP at any moment or continue accumulating support. We tested JFT using two rich datasets in which basketball enthusiasts (N=125, N=364) provided probabilistic forecasts of NCAA men’s basketball end-of-season rankings. The model successfully predicted both SPs and response times. Notably, the data revealed that participants exhibited classic context effects (attraction, similarity, and compromise), patterns that challenge many prevailing SP theories. At the same time, the results exposed a key challenge for JFT (and many other evidence accumulation models): accurately modeling SPs required accounting for the participant’s subjective representations of teams. This work advances the modeling of probability judgments by introducing a dynamic framework that accommodates belief evolution, offering a novel approach for understanding how SPs develop over time. | Tim Pleskac Indiana University; Jun Fang Indiana University; Apramay Mishra Indiana University; James Adaryukov Indiana University; Xiaohong Cai Indiana University |
Ven Popov University of Zurich |
A Tale of Two Beasts: Reflections on the first ascent of La-sum (6045m) | In October 2022, a team of three friends and I made the first ascent of a previously unnamed 6045-meter peak in the Changla Himal region of northwestern Nepal. The many health and logistical challenges of the month-long expedition and the first ascent made me reconsider why I climb, what adventure means to me, and what I want my life to be. | Ven Popov |
Maciej Hanczakowski Adam Mickiewicz University |
Memory processing, memory performance, and metacognition. | In a typical memory experiment, a certain variable is manipulated and its effects on performance in a given memory test are assessed. Finding a significant effect on performance convinces the researcher that memory processes were affected, while finding no effect often leads to a conclusion – sometimes supported with some fancy statistics – that no change in memory processes occurred as a result of applying that particular experimental manipulation. In this talk, we discuss a host of findings showing that a manipulation of context reinstatement – re-encountering in a memory task the same combination of item and context that was present at item encoding – can yield a change in memory processes that nevertheless fails to affect memory performance. We also show that this change in memory processes is then picked up by metacognitive measures such as postdictions of accuracy in a memory test or predictions of performance in future memory tests. We conclude that interpreting null results from memory experiments is inherently tricky and that a proper investigation of changes in memory processes requires multiple measures instead of concentrating solely on indices of performance. | Maciej Hanczakowski Adam Mickiewicz University; Katarzyna Zawadzka Adam Mickiewicz University |
Maël Delem University of Lyon |
The Shape of Thought: EEG Topological Invariants of Working Memory Representations in Aphantasia | Aphantasia refers to the inability to form visual mental images. Several studies have shown that aphantasics perform as well as controls in visual working memory (WM) tasks. However, visual WM is often theorized as requiring the formation and maintenance of mental images. Neuroimaging work has shown similar representations between perception, imagery and visual WM. The present study aims to understand the nature of the neural representations of aphantasics in visual WM tasks and how they differ from those of control participants. We propose a continuous colour recall paradigm, in which we will use electroencephalography (EEG) to study the neural dynamics of colour representation during perception, encoding, and maintenance in WM. We will use Representational Similarity Analysis (RSA) to enable the comparison of data of different natures and from different participants. In addition, we propose to combine RSA with Topological Data Analysis, a set of mathematical tools for studying the “shape” of data and extracting stable characteristics. This methodological scheme aims to objectively characterize individual differences in mental imagery to clarify the forms of WM representation in aphantasia. | Maël Delem University of Lyon; Arnaud Fournel University of Lyon; Rémi Vaucher University of Lyon; Gaën Plancher University of Lyon |
David Landy Netflix and Indiana University |
The role of Induced Sensory Experiences in Advanced Mathematics | Cognitive science has a long tradition of studying the range of the possible mechanisms of thought: the different ways that training and life experience can bend and form these basic mechanisms into quite dramatically distinct forms. The drive to understand what is cognitively possible also leads inquiry into the boundaries of human cognition: the rare and distinctive cognitive processes which enable small groups of people to think in ways that are unavailable to most others. The project I will report on focuses on a class of cognitive mechanisms which we think are (1) rare, (2) trainable, and (3) singularly effective in certain kinds of problem solving: direct sensory experiences of non-physical entities. This may seem unusual, but there is indeed a surface appearance that people can, through a variety of means, have something at least close to direct experiences with things that are not literally present. Clear examples come from psychedelic drugs, but people from various religious traditions and from advanced mathematical practice likewise report powerful sensory experiences of non-physical objects and situations. Unlike psychedelic experiences, these mathematical and religious practices regularly induce the unseen through long training, effort, and practice--practices that are regimented, regular, and taught. I will report on a project studying the distinct non-physical experiences of Ph.D.-level mathematicians, and discuss the potential cognitive mechanisms that may subserve them and the cultural practices, shared with religious and spiritual communities, that may inculcate them. | David Landy Netflix and Indiana University; Eleanor Schille-Hudson Stanford University |
Marco Zorzi University of Padova |
Learning to represent visual numerosity: Insights from deep generative models | I will discuss the ability of both large-scale and small-scale generative models to learn to represent visual numerosity without explicit supervision. I will show that visual enumeration of object sets is challenging for large-scale multimodal foundation models, with no model showing exact counting and only state-of-the-art proprietary models broadly following the pattern of human numerosity estimation. Our analyses of the vision-language transformer (ViLT) model reveal that inadequate performance largely stems from a misalignment between vision and language embeddings, broadly known as the “modality gap”. I will also discuss recent simulations with small-scale neural network models designed to investigate how the quality of visual numerosity representations and the ability to capture the psychophysics of human numerosity perception are modulated by the distributional properties of natural image datasets and by intrinsic (i.e., architectural) limits in image encoding capacity. | Marco Zorzi University of Padova; Alberto Testolin University of Padova |
Richard M Shiffrin Indiana University |
Inhibition is not a cause of forgetting | Retrieval-Induced Forgetting (RIF) is defined as reduced recall of some studied item when one is cued to recall some other studied item. Inhibition of the retrieved trace has been proposed as a cause: e.g., Anderson, Bjork, and Bjork hypothesized that a first-letter cue for recall of a word starting with that letter might sometimes produce retrieval of another word not starting with that letter, and that the retrieved trace would be inhibited (degraded). Other theories suggest the opposite, that a wrongly retrieved trace would be stored and/or strengthened. It is not known if such incorrect retrievals occur often enough to produce either inhibition or the opposite. The present studies included conditions with non-diagnostic picture primes that would induce retrieval of the wrong word. All conditions compared final recall of the studied word that might have been inhibited (or the opposite) due to cued testing of a different word, with final recall of a word studied after the testing: a trace can only be inhibited if it exists, so conditions posited to produce inhibition should increase the difference between recall of early words vs. late words, whereas storage or strengthening should decrease the difference. Whichever is the case, picture priming should increase the effect. Every test in two experiments produced the opposite of inhibition: incorrect words that are retrieved are better recalled later. We model the results with well-established memory processes. We suggest further that RIF found in other studies can be explained by well-established memory processes, namely competition and context change. (April 13) | Anirudh Doppalapudi Indiana University; Richard Shiffrin Indiana University |
Jay Holden University of Cincinnati |
Behavioral Dynamics of Temporal Estimation Performance | Generic temporal estimation (TE) tasks require participants to “become a clock” by repeatedly estimating a specific interval of time, such as ¾ sec. This project investigates how changes in the timescale of information available from task demands and perceptual and physiological sources impact temporal estimation performance. A generic TE protocol uses a single-button mode of responding. Here the generic protocol is retooled with two-button responding that alternates between participants’ right and left fingers. This allows for controlled studies that contrast performance between individuals (solo) and distributed two-person teams (dyads). Other manipulated variables control the timescales available for the exchange of information between the responding limbs. For example, a trial-blocking factor periodically cues one or the other button across a range of timescales, from every other trial up to blocks of 32 trials. Runs of the same response cues impede between-limb perceptual information exchanges on timescales shorter than the blocking period. Blocking is predicted to hamper dyadic performance more than solo performance because solo performance may rely on intrinsic physiological information exchanges, unavailable to dyads, to compensate. Similarly, a sinusoidal inter-trial interval variation factor allows for the quantification of between-limb coordination strength across timescales, a proxy for information exchange. The goal is to compare solo and dyadic performances to delineate the informational resources supporting fractal scaling in temporal estimation performance, as a function of timescale. | Jay Holden University of Cincinnati; Neil Swain University of Cincinnati |
Johannes Ziegler Aix-Marseille University and CNRS |
Strict Phonics Beats Mixed Phonics: Improving Reading Acquisition and Reducing Inequality | Reading acquisition in alphabetic writing systems relies on learning grapheme-phoneme correspondences, yet the optimal method for teaching these mappings remains debated. While phonics-based instruction is widely supported, the relative efficacy of strict versus mixed phonics approaches has not been empirically established. Here, we report findings from a large-scale study in France, analyzing responses from over 9,000 first-grade teachers and the reading outcomes of 140,000 students. Our results show that strict phonics methods lead to significantly better reading performance than mixed methods, with the greatest benefits for struggling readers and socioeconomically disadvantaged students. These findings, obtained in a language of intermediate orthographic depth, suggest that rigorous phonics instruction may improve literacy outcomes across most alphabetic writing systems, with implications for educational policy, teacher training, prevention of reading difficulties, and social equality. | Johannes Ziegler Aix-Marseille University and CNRS; Paul Gioia AMPIRIC, Aix-Marseille University; Jérôme Deauvieau École Normale Supérieure (ENS), Paris Science & Lettres (PSL) |
Zhong-Lin Lu NYU Shanghai & NYU |
Multi-Resolution Bayesian Framework for High-Resolution Behavioral Analysis | This talk presents a novel multi-resolution Bayesian framework for analyzing sparse behavioral data, designed to overcome the challenges of capturing detailed dynamics in human behavior across various domains. The method uses a hierarchical structure to model coarse and fine-scale trends, leveraging Bayesian inference to improve precision and scalability. Implemented with computational efficiency in mind, it avoids the limitations of traditional approaches by reducing the burden of large-scale data processing. The framework was evaluated through experiments on diverse datasets, demonstrating its effectiveness in providing high-resolution insights compared to existing methods. It also offers theoretical connections to other multi-resolution techniques while emphasizing interpretability for structured behavioral data. With potential applications in fields like education, healthcare, and beyond, this approach paves the way for deeper understanding of human performance and opportunities for further methodological advancements. | Zhong-Lin Lu NYU Shanghai & NYU |
Christopher J. MacLellan Georgia Institute of Technology |
What is a symbolic system that we may know it, and why should we care? | There has been growing interest in the development of "neuro-symbolic" artificial intelligence systems that synergistically combine and integrate neural and symbolic representations and processing. However, we argue there does not exist a clear definition that captures what researchers colloquially mean when they label a system as symbolic. Under the broad definition introduced by Newell and Simon (1972), neural systems would likely also be characterized as physical symbol systems, since they too consist of patterns of entities (neurons / symbols) that occur as components of expressions (symbol structures). What, then, do people mean when they label a system as "symbolic" and juxtapose this label against "neural"? In this talk, I present a pair of concept formation models that intentionally blur the distinction between neural and symbolic, and explore what it means for a system to be symbolic and why this may be desirable. I will introduce several characteristics that we hypothesize may make a system more "symbolic"—we hypothesize symbolic systems are sparse, localized, and bounded in both representation and processing. Using examples from our concept formation models, we demonstrate how these properties may translate into more rapid, stable, and efficient learning and processing. | Christopher J. MacLellan Georgia Tech; Pat Langley GTRI / ISLE; Kyle Moore Vanderbilt; Danny Weitekamp Georgia Tech; Jesse Roberts Tennessee Tech; Doug Fisher Vanderbilt |
Shelley Xiuli Tong The University of Hong Kong |
Abstract Over Item-Specific Information: Statistical Learning Optimizes Memory Representations | Statistical learning optimizes limited memory resources by abstracting relations among specific items. However, the mechanisms underlying the representations of abstract and item-specific information remain unclear. This study developed a learning-memory representation paradigm in which three groups of participants, i.e., control (Experiment 1), item-specific encoding (Experiment 2), and abstract encoding (Experiment 3), were presented with a series of picture-artificial character pairs containing both abstract semantic categories at high (100%), moderate (66.7%), and low (33.3%) probability levels and item-specific information. In all experiments, participants performed an online visual search task that simultaneously assessed statistical learning and memory representation by examining how distractors containing abstract or item-specific information influenced the search speed for artificial characters. Participants spent more time searching among abstract distractors than item-specific ones in the control but not the item-specific encoding condition, indicating that, by default, working memory prioritizes abstract information. In contrast, in the abstract encoding condition, the statistical learning effect varied across probability levels, with enhanced prioritization of abstract information for moderate and low compared to high probability items. These findings suggest that statistical learning is central to the abstraction process, with cognitive encoding strategies and input probabilities influencing the formation of abstract and item-specific representations through a flexible working memory prioritization system, particularly for uncertain inputs. | Shelley Xiuli Tong The University of Hong Kong; Mei Zhou The University of Hong Kong |
Ken Malmberg USF |
A Convergence of Theories of Mind and Brain | Several models of cognition were developed by different researchers at different levels of account (brain versus mind). While useful insofar as they account for a wide swath of empirical findings, they are often viewed separately, perhaps even in an adversarial sense. Yet we show them to be formally nested within a more general mathematical framework which is partially derived from the Hodgkin-Huxley models. Mathematical analysis of this framework indicates these equivalent systems are a coherent family of models that describe the mind, on the one hand, and the brain, on the other, providing statistically accurate descriptions and predictions of observed data. Accordingly, our findings support a parallelism or monism between mind and brain. We conjecture that a key link in this formal equivalence is Luce’s Existence Theorem, suggesting that different mind/brain phenomena arise from a competitive system in which behaviors are stochastically generated. | Chad Dube USF; Kenneth J. Malmberg USF |
Vladimir Sloutsky Ohio State University |
How Cognitive Immaturity Supports Cognitive Development | In this talk, I will argue that cognitive immaturities have not only costs but also substantial benefits. I will specifically consider benefits that stem from the tendency to explore and sample information broadly. Several mechanisms that may underlie this broad information sampling have been proposed, including inhibition failure, noisy processing, and strategic information-seeking behaviors. Given that these explanations face substantial empirical challenges, I will propose a different possibility: these broad sampling and exploration behaviors are a direct consequence of the immaturity of the Working Memory-Selective Attention System. After reviewing this possibility, I will present evidence testing it. | Vladimir Sloutsky Ohio State University |
Gaën Plancher University of Lyon |
Non-Visual Working Memory Strategies in Aphantasia: An Objective Assessment | Aphantasia—the inability to generate visual mental images—presents a paradox: individuals with aphantasia perform well on visual working memory tasks previously thought to require imagery. The literature suggests aphantasics employ alternative spatial or verbal strategies, yet without objective validation. We developed an online working memory task for a large-scale study, for which data are currently being collected. Participants memorized items with three features: colour (visual), orientation (spatial), and word (verbal). Our independent variable was the presence/absence of aphantasia (based on imagery questionnaires); dependent variables included accuracy scores for each feature type and prioritization patterns. The task's difficulty forces strategic prioritization of features to maximize performance. We hypothesize aphantasics will show lower colour recall accuracy while compensating through enhanced word/orientation memory. If feature preference patterns correlate with subjective imagery assessments, this paradigm would offer an objective measure of representational strategies. Beyond methodology, this approach could clarify cognitive mechanisms underlying memory processes in aphantasia and across the visual imagery spectrum. | Gaën Plancher; Charline Favier; Maël Delem |
Robert Goldstone Indiana University |
Multi-scale modularity in networks of agents that adapt to increase their relevance | Many biological, social, and economic systems evolve over time to exhibit modularity at multiple scales. Bodies are made up of organs, and an individual organ like the brain is made up of regions that are differentiated from each other both cytoarchitecturally and functionally. At the social level, universities and companies are composed of departments with specialized functions, and of areas of specialization within departments. The current work develops a model of the formation of hierarchically organized modules in originally undifferentiated systems. The specific question addressed by the generative modeling is: can hierarchical modularity emerge from vertices that simply adapt their outgoing edge weights to other vertices so as to increase their relevance within the network? Simulations demonstrate several phenomena related to the emergence of multi-scale modules: 1) reciprocal connections (when vertex A connects strongly to B, then B connects strongly to A), 2) densely connected cliques above the dyadic level, 3) increasing modularity and number of communities as the network evolves, 4) a fine-to-coarse pattern of community formation, and 5) an increasing amount of regularity across all eigenvectors as the network evolves. Thus, multi-scale modularity can arise in networks even without incorporating spatial proximity, explicit collaborations, fields, trust, coalitions, external inputs, or similarity, pointing to possibly widespread applicability to economic trade, social organizations, professional networks, social media structures, and neural circuits. | Robert Goldstone Indiana University |
John Dunn University of Western Australia |
A test of differentiation and criterion shift accounts of the strength-based mirror effect. | In recognition memory, a mirror effect occurs when “strong” items have a higher hit rate and lower false alarm rate than “weak” items. The strength-based mirror effect occurs when strong items are studied under conditions that improve subsequent memory, such as increased encoding time or multiple presentations of each item. There are currently two competing explanations of this effect. The differentiation account proposes that a strong list leads to a relative increase in memory strength for targets coupled with a relative decrease in strength for lures. The criterion shift account proposes that there is no difference in the memory strength of lures but there is a difference in response criteria for weak and strong lists. These accounts cannot be distinguished by merely comparing strong and weak conditions because of a fundamental indeterminacy. We describe this problem and investigate an indirect test, based on differences in word frequency, with unexpected results. | John Dunn University of Western Australia; Rachel Stephens University of Adelaide; Laura Anderson Binghamton University |
Cleotilde Gonzalez Carnegie Mellon University |
Exploration-Exploitation in Sequential Decisions from Experience | Exploration-exploitation tradeoffs are commonly studied using multi-armed bandit problems, where decision-makers must balance trying new options with leveraging known rewards. Reinforcement Learning (RL) models address this tradeoff probabilistically, deciding whether to explore uncertain options or exploit the best-known choice. In sequential decision problems, the tradeoff is often framed in terms of search length—whether to continue searching or commit to the best available option. Traditional models assume threshold policies, where decisions to stop (exploit) or continue (explore) are based on predefined value criteria. We argue that exploration-exploitation decisions emerge from experience, where expected values for actions are dynamically updated through sequential choices and feedback, rather than relying on explicit rules or thresholds. A decision-making model based on Instance-Based Learning Theory captures these tradeoffs adaptively, offering a more flexible and cognitively plausible account of human decision-making. | Cleotilde Gonzalez Carnegie Mellon University |
David Broniatowski George Washington University |
Evidence for Fuzzy-Trace Theory Over Prospect Theory in Decisions Under Risk | Prospect Theory (PT) predicts a “Fourfold Pattern of Risk Attitudes”: an opposite pattern of risk seeking/avoidance for low-probability versus high-probability gains and losses. In contrast, Fuzzy-Trace Theory (FTT) explains subjects’ choices as a consequence of how the categorical gists of decision options are encoded. We tested these competing predictions using an online sample of N=285 subjects, manipulating frame (gain vs. loss), probability (5% vs. 95%), and whether complements in the risky gamble were truncated to emphasize the gist (verbatim: some vs. some; standard; and gist: some vs. none). Assuming a reward value of 10,000, standard parameter values for PT predict the fourfold pattern of risk preference. Although we observed a statistically significant interaction between frame and probability, F(1,283) = 14.66, p < .001, η²p = 0.05, results were opposite to PT’s predictions. PT also predicts that people overweight low and underweight high probabilities. Contrary to PT’s predictions, binomial tests showed that most subjects (174; 61%) agreed that 5% is essentially nil, p < 0.001, and that 95% is virtually certain (204 subjects; 72%), p < 0.001. PT also predicts decreasing marginal returns for gains but increasing marginal returns for losses. Although subjects were more likely to agree that 9,500 is the same as 10,000 in the gain frame (68%) compared to the loss frame (51%), χ²(1) = 8.71, p = 0.003, they were more likely to agree that 500 is nil in the loss frame (50%) compared to the gain frame (35%), χ²(1) = 6.51, p = 0.01. Per FTT’s predictions, we observed a statistically significant interaction between frame and truncation, F(2,566) = 7.04, p < 0.001, η²p = 0.34, with subjects experiencing stronger framing effects in the gist condition and attenuated framing effects in the verbatim condition. Across all problems, subjects endorsing gist statements made decisions consistent with these gists. | David A. Broniatowski GWU; Valerie F. Reyna Cornell |
Andrew Hanson Indiana University, Emeritus |
The Magic of Matching | The human eye can easily detect, perceive, and decode the 3D positions of a pair of identical 3D object clouds differing only by a rigid rotation. But it is extremely difficult for the human vision system to say anything more about such a pair of clouds. An ingenious family of computer algorithms, however, can find the rotation relating such clouds in the blink of an eye. We will describe a new addition to this family of algorithms based on closed-form quaternion least-squares methods for the cloud matching task. We will present a spectrum of new insights into both the cloud-to-cloud matching task and the related orthographic projection (OnP) case, for which the second 3D cloud's data are missing and only an orthographic 2D image is available as input data. Time permitting, similar novel insights into the perspective projection (PnP) problem will be suggested. | Andrew Hanson Indiana University, Emeritus |
Stephan Lewandowsky University of Bristol |
Gender quotas and meritocracy | The U.S. administration has taken strong action against what it calls “discriminatory” Diversity, Equity, and Inclusivity (DEI) measures by public institutions as well as private corporations. The anti-DEI stance appears to be (at least ostensibly) motivated by a commitment to merit as the sole basis for hiring decisions and performance evaluations. Indeed, at first glance, measures such as gender quotas (requiring a balance between men and women in an organization or team) appear to run counter to relying solely on merit as a criterion for hiring or promotion. On closer inspection, however, it turns out that gender quotas frequently increase the performance of a team or an organization, because the use of quotas broadens the pool of competent applicants and prevents mediocre individuals of a dominant group (usually men) from being hired or promoted. We review the literature pertaining to gender quotas and show that quotas very often have a beneficial effect on performance. We then present two studies showing that an empathy-based approach that affirms people’s basic values can shift attitudes towards gender quotas in a positive direction when the compatibility of quotas with meritocracy is explained, including among people who were against gender quotas before the study commenced. | Stephan Lewandowsky University of Bristol; Dawn Holford University of Bristol |
Klaus Oberauer University of Zurich |
Steps to a computational model of working memory | I will outline a computational model that explains some benchmark findings about working memory (WM). The model builds on the following assumptions. Items of a memory set are encoded sequentially by creating temporary bindings between their elements. Bindings are mediated by a population of neurons. Each item recruits a constant proportion of available binding neurons, which are committed exclusively to that item. This creates a primacy gradient of binding strength. Encoding of each new event releases some committed binding neurons, creating a recency gradient of binding strength. Response selection is governed by a signal-discrimination process. Together, these assumptions explain serial-position effects and the set-size effect. Free time after encoding an item allows for the gradual release of those committed neurons that contribute the least to encoding the item. This pruning process frees up binding capacity for subsequently encoded items with hardly any cost for preceding items. This assumption explains the beneficial effect of free time, and the cognitive-load effect in complex span tasks. | Klaus Oberauer University of Zurich |
Angela Nelson Lowe University of California, San Diego |
Memory Models Can Read: Combining Computational Models of Memory & Reading with SARK-R | The process of learning to read has been highly studied in the scientific literature, with some scholars suggesting that the investigation of the teaching of reading began over 400 years ago (Fries, 1964). Since the emergence of psycholinguistics in the 1950s, many psychologists have studied this process; however, few if any of these investigations evaluate the process of learning to read in the context of what is already known about memory. The purpose of this project, therefore, is to use existing memory model architecture to provide a different and potentially valuable way of describing the process of learning to read. In this talk I will describe a new extension of the SARKAE memory model (Nelson & Shiffrin, 2013), termed SARK-R, that simulates the process of learning to read using the model’s interactive connections between episodic and semantic memory systems. By relying on the interactions between knowledge and events, the model can provide an explanation for various components of the learning-to-read process such as building an orthographic lexicon that includes differential variability for a letter’s visual information and sound information, the self-teaching hypothesis of whole-word recognition, and characteristic error patterns in developing readers. This talk will describe the model structure and simulation processes, fits of the model to existing findings, and possible extensions that could test the model assumptions with behavioral studies. | Angela Nelson Lowe University of California, San Diego; Sabrina Y. Ha University of California, San Diego; Anne S. Yilmaz University of California, San Diego |
Ian Krajbich UCLA Psychology |
The dynamics of open-ended decisions | In open-ended decisions, options are often ill-defined and must be generated by the decision maker. How do people make decisions without predefined options? Do people form a consideration set first and choose from it? Or do they evaluate the options while recalling them from memory? Our study answers this question using a novel two-session experiment with 30 decision categories. We find that options that are recalled earlier and options that are more favorable are chosen more often and also chosen faster. A computational model in which decision makers decide while searching through their memory can account for choices and response times (RTs) in open-ended decisions and can accurately capture the choice-share differences under different time constraints. Together, our behavioral and model-based findings shed light on the cognitive mechanisms of memory-based decisions, advancing the understanding of open-ended decisions. | Xiaozhi Yang University of Pennsylvania; Zhihao Zhang University of Virginia; Ming Hsu UC Berkeley Haas; Ian Krajbich UCLA Psychology |
Thomas Palmeri Vanderbilt University |
Neurocomputational model of neurophysiology, electrophysiology, and behavior | We showed recently that our neurocomputational model of target selection during visual search called Salience by Competitive and Recurrent Interactions (SCRI) accounts for neurophysiology (single-unit neural spiking activity) and, when combined with our Gated Accumulator Model (GAM), accounts for saccade response times in awake behaving non-human primates. SCRI assumes that a salience representation is produced by interacting computations of detecting each stimulus (localization) and matching each stimulus to the target (identification). We show that the temporal dynamics of representations in SCRI not only accounts for neurophysiology and behavior but also explains the timing and amplitude of a non-human primate analogue of the N2pc, a key electrophysiological neural correlate of attention. We also observe that mathematically opposing identification and salience signals are necessary to account for the variation of N2pc latency and amplitude with the number of distracting items during search in a manner not inconsistent with the finding of opposing biophysical sources for the N2pc. | Thomas Palmeri Vanderbilt University; Giwon Bahg Vanderbilt University; Gregory E. Cox University of Albany; Gordon D. Logan Vanderbilt University; Jeffrey D. Schall York University |
Marcel Binz Helmholtz Munich |
Hypothesis testing in natural language | Most theories in the behavioral sciences are still verbal. I’ll present a framework for turning such theories into quantitative predictions that can be evaluated against each other. The core of this framework relies on using large language models as prediction engines. I'll discuss the advantages, disadvantages, and open questions that come with this framework. Finally, I'll present a case study of how it can be used to identify human preferences. | Marcel Binz Helmholtz Munich |
Bruno Bocanegra Erasmus University Rotterdam |
Prompting LLMs to induce novel structure | Over the past years, large language models (LLMs) have shown remarkable abilities to solve complex tasks which, when executed by human participants, require higher-level problem-solving and reasoning abilities. At the same time, both 'pure' LLMs and more recent 'reasoning' models display surprising brittleness in these abilities, as they are often led astray by superficial characteristics of the task, such as the specific vocabulary used to instantiate the problem. These striking failures can often be traced back to the distribution of patterns of tokens present in the models' original training data, whereby the specific task instantiation used a pattern of tokens that mismatched the problem. In our latest series of experiments, we have been assessing various LLMs on their abilities to induce novel structure in a simple problem-solving task that systematically varied the tokens used to instantiate the task. Our experimental results suggest that the full range of performance (from ceiling to floor) that current state-of-the-art LLMs display can be explained by assuming the models perform optimal Bayesian inference based on prior knowledge and the data provided to them in the prompt. | Bruno Bocanegra Erasmus University Rotterdam |
Jonathan Tullis University of Arizona |
The Mind Reader’s Dilemma: How We Misjudge What Others Know | Accurately estimating others’ knowledge is vital when navigating social environments. For example, teachers must anticipate their students’ understanding to plan lessons and communicate effectively. Yet research consistently shows we have systematic biases in our estimates of what other people know. One’s own knowledge, for example, has been labeled a “curse” because it can bias estimates of what others know. In this talk, I will examine situational factors and individual differences that exacerbate or reduce biases in estimates of what others know. We will model social metacognitive predictions within a cue-utilization framework of social metacognition, in which predictions of others’ knowledge are dynamically generated by estimators who weigh available and salient cues. We argue that the availability of diagnostic cues and the failure to appropriately shift among relevant cues cause systematic impairments in predictions. | Jonathan Tullis University of Arizona; Yaoping Peng Hunan Normal University |
Patricia L Foster Indiana University, Bloomington |
Bird Flu: Will it be the next pandemic? | The highly pathogenic A(H5N1) avian influenza virus, first detected in wild birds in 1997, has since spread globally, carried by migratory birds. Starting in 2022, the virus was detected in US domestic poultry, resulting, to date, in the loss of nearly 200 million birds. In 2024 an unexpected jump spread the virus to domestic cows, followed by transmission to humans and other mammals. In the US to date there have been 70 confirmed cases of bird flu in humans and two deaths. In this lecture I will examine the differences between the human and bird flu viruses, what makes the bird flu virus able to infect so many different species of animals, and the threat it poses of widespread human-to-human transmission. I will discuss preventative measures, possible treatments, and whether effective vaccines are being developed. | Patricia L Foster Indiana University, Bloomington |
Fabien Mathy Université Côte d'Azur |
Reevaluating spatialization in working memory: Implications for models of mental scanning | Spatialization in working memory refers to the mental organization of serial information on a horizontal axis, a phenomenon presumably shaped by reading and writing habits. In Western individuals, early items in a sequence are preferentially associated with the left side of a putative mental line, while later items align with the right, leading to response compatibility effects at recognition. The prevailing account likens working memory to a mental board, suggesting that serial order is grounded in spatial attention mechanisms. However, this framework implies both continuous and symmetric effects of item positions—patterns that are not consistently reflected in empirical data. Notably, response time differences between hands are largely driven by primacy and recency effects, with middle items contributing minimally to spatialization. Additionally, response asymmetries challenge the assumption that both hands contribute equally to response compatibility with the internal space. Moreover, although not yet fully understood, spatialization challenges sixty years of mental scanning studies, potentially explaining conflicting results and replication issues. Specifically, previous studies may have overlooked that response time variations as a function of serial position depend on response hand assignment, which has not always been clearly specified, as seen in some iterations of Sternberg’s paradigm. This work underscores the need for a more precise conceptualization of spatialization in working memory and its methodological implications for studying recognition in working memory more generally. | Fabien Mathy Université Côte d'Azur |
Ed Awh University of Chicago |
Content-independent indexing mediates capacity limits in visual working memory | Although past neural studies of working memory (WM) have focused on stimulus-specific activity that tracks the stored feature values, a separate line of evidence has revealed neural signals that track the number of items in WM, independent of the contents of those items. Thus, a common neural signature of WM load can be identified for highly distinct visual features, and even across visual and auditory sensory modalities. Our working hypothesis is that these content-independent load signals reflect the operation of spatiotemporal “pointers” that enable the binding of items to the surrounding event context, and that contextual binding is a necessary component of WM storage. To test this hypothesis, we applied representational similarity analysis (RSA) to EEG data to determine the number of pointers deployed across set sizes that ranged from 1 to 8 items. The findings describe a “neural load” function that rises with increasing numbers of to-be-stored items, while controlling for differences in sensory energy and spatial attention. Critically, this function differed sharply as a function of individual differences in WM capacity. Subjects with higher capacity showed a monotonic rise in the number of pointers deployed that leveled off at higher set sizes. By contrast, low capacity subjects showed an initial rise followed by a sharp decline in the number of pointers deployed at higher set sizes. This empirical pattern dovetails with past behavioral and neural studies that have documented increased costs for low capacity observers as the number of memoranda exceeds capacity. We conclude that content-independent indexing is a core component of individual differences in WM ability. | Ed Awh University of Chicago; Henry Jones University of Chicago; Darius Suplica University of Chicago |
Stuart Smiley 5AM Solutions |
How I learned to stop worrying and love the coding | Science is hard. So why make it even harder on yourselves? In this talk, I will discuss one way to make science easier on yourself. Or in other words: how I learned to stop worrying and love the coding. Based on my over 25 years of experience as a software engineer and the research and work of others, we will discuss how we can use cognitive science and working memory to write better software. We will also discuss why this is important to science. As scientists, you all are experts in the design and development of meaningful experiments. Then you must follow up with analyzing the data collected. For many of you, this involves writing a significant amount of software, often in languages such as R and Python, for both data collection and data analysis. Writing good software (i.e., software that actually does what you think it should be doing) and reading someone else’s software are cognitively demanding tasks. Using the research of cognitive science, software engineers have developed strategies that reduce the amount of information we have to hold in working memory to write quality code. I will share these research-based strategies to illustrate how one can make code better, more reliable, and easier to understand, thereby improving the science it supports. | Stuart Smiley 5AM Solutions |
Jennifer Lentz Indiana University |
The contributions of left and right ears in dichotic listening | In this talk, I will discuss a novel application of cumulative hazard ratios based on reaction time to evaluate the relative roles of the left and right ears in dichotic listening. Experimental data in which listeners reacted to two different digits presented to the left and right ears will also be provided, as an illustration of the advantages of this efficiency metric. In the experiment, two different decision rules were used. In the OR condition, participants pressed one key when either digit was odd and another key otherwise. In the AND condition, they pressed one key when both digits were odd and another key otherwise. Young listeners with normal hearing demonstrated a greater reliance on the right ear, which tended to dominate the binaural percept, but the degree of dominance of the right ear differed between the AND and OR conditions. This approach quantifies the contributions of the two ears more directly than the more commonly used accuracy-based measures or differences in mean reaction time. I will discuss the potential applications and benefits of using this metric for assessing binaural abilities in individuals with asymmetrical hearing loss and for rehabilitative strategies such as cochlear implants or hearing aids. | Jennifer Lentz Indiana University |
Ken Malmberg USF |
The von Restorff Effect in Free Recall, Recognition, and Source Memory | Distinct items encountered in a sequence are better recalled than less distinctive items (von Restorff, 1933). This is often referred to as a von Restorff or an isolation effect. There is rarely an isolation effect for recognition, which is inconsistent with intuition and all known theories of memory. Three experiments extend prior findings to a multi-list procedure, confirming a free recall advantage for unexpected words, but no recognition advantage unless recall is tested before recognition. A somewhat ambiguous effect was observed when source memory was tested. Based on these results, we hypothesized that the lack of a von Restorff effect for recognition is due to constraints on traditional designs used to study isolation effects and perhaps uncontrolled factors during testing. In three additional experiments targeted specifically at observing isolation effects in recognition and source memory, we obtained more observations per subject in the critical condition, reducing measurement error, and controlled the order in which recognition and source memory items were tested; the results revealed von Restorff effects for both recognition and source memory. Implications for models of memory and attention are discussed. | Ken Malmberg USF; Siri-Maria Kamp Trier University |
Bertrand Thirion Inria |
Brain functional alignment in the age of generative AI | Anatomical and functional inter-individual variability poses a significant challenge to group analysis in neuroimaging studies. While anatomical templates help mitigate morphological differences by coregistering subjects in fMRI, they fail to account for functional variability, often leading to blurred activation patterns on the template due to group-level averaging. To address this problem, hyperalignment identifies fine-grained correspondences between functional brain maps of different subjects, with Procrustes analysis and optimal transport being among the most effective approaches. However, many hyperalignment-based imaging studies rely on the selection of a single target subject as the reference to which all other subjects' data are aligned; the introduction of functional templates eliminates the need for this arbitrary selection, effectively encapsulating population similarities while preserving anatomical coherence. In this talk, we first review the strongest brain alignment techniques and the resulting template estimation procedures. We discuss the validation of such templates through decoding experiments, both in standard classification settings and using recent generative decoding approaches, where functional alignment integrates well with deep phenotyping approaches. We present preliminary results on the impact of functional alignment on cross-species brain data analysis. | Bertrand Thirion Inria |
Michelle Awh University of Pennsylvania |
A mouse model of binge eating increases habitual behavior | Binge eating is characterized by episodes of uncontrollable food consumption, independent of physiological need. A key part of this phenotype is the extension of habitual tendencies to other domains. Habitual behavior can persist independently of the link between action and reward, while goal-directed behavior is flexible and sensitive to changes in this relationship. There is strong motivation to develop animal models of binge eating to enable direct study of the neural circuits underlying this behavior. We show that when mice are given daily restricted access to a palatable high-fat diet (RA HFD), they consume over half of their daily caloric intake within just two hours, a concentrated intake resembling binge eating. We assess goal-directed behavior using an operant task in which mice learn to press a lever for a food reward. Subsequently, the reward is made available regardless of lever pressing. Mice with a history of RA HFD continue pressing despite the weakened action-outcome contingency, suggesting a shift from goal-directed control towards habitual behavior. | Michelle Awh University of Pennsylvania; Nicholas K Smith University of Pennsylvania; J Nicholas Betley University of Pennsylvania |
Bruno Nicenboim Department of Cognitive Science & Artificial Intelligence, Tilburg University |
Reading as a continuous flow of information | Traditional models of reading often assume a series of discrete stages in which each level of representation (e.g., orthography, lexical access) is processed separately. Because reading times are too short to accommodate fully serial processing, existing models typically leverage parafoveal preview—partial processing of upcoming words before direct fixation—and posit that some stages operate in parallel (as in E-Z Reader and SWIFT). However, these models focus largely on eye-movement control and struggle to incorporate stages that deal with the higher-order processes supporting comprehension. I propose the CoFI (Continuous Flow of Information) Reader, which replaces strictly sequential stages with a dynamic system of concurrent processing to explain how humans rapidly comprehend text. To demonstrate its ability to capture reading behavior, I implement CoFI Reader as a hierarchical Bayesian model and fit it to self-paced reading data. In this modeling framework, partially completed outputs from lower-level processes continuously feed into higher-level representations, while the final layer inhibits a stochastic timer. Once the timer threshold is reached, the reader is ready to read the next word. Crucially, earlier words continue to influence ongoing linguistic processing even after the reader has moved on. Although the experimental paradigm from which the reading data come precludes parafoveal preview, CoFI Reader successfully replicates both the short latencies and the systematic spillover effects observed in empirical studies—findings that existing models capture only by positing parafoveal preview. This talk will describe key aspects of the model’s architecture and parameterization, highlight its empirical fit to real-world data, and discuss the broader implications for theories of reading. | Bruno Nicenboim Department of Cognitive Science & Artificial Intelligence, Tilburg University |
Mathieu Servant Université Marie et Louis Pasteur (France) |
Deciding with muscles | Goal-directed behavior requires a translation of our choices into actions through muscle contractions. Research in cognitive psychology and neuroscience has heavily focused on the decision process during the past sixty years, but the articulation between decision and motor systems remains poorly understood. Progressing on this issue is important for several reasons. From the perspective of decision sciences, it is unclear if the motor system simply executes our choices or actively contributes to deliberation. From the perspective of motor sciences, understanding movements requires an understanding of neural inputs to muscles and the upstream processes that determine them. Finally, several psychiatric and neurological disorders appear to affect both decision and motor systems, as suggested by apathy and impulse control disorders, psychomotor retardation symptoms, and paradoxical movements, among others. A precise understanding of these alterations and the development of efficient therapeutic strategies require an integrated theory of decision and motor processes. In the first part of this talk, I will show that the electrical activity of muscles involved in the response during laboratory choice tasks is strikingly consistent with the process of evidence accumulation assumed by sequential sampling decision models. I will then introduce a theoretical framework linking decision and motor processes, the gated cascade diffusion model, which provides a good quantitative account of both behavioral and muscular data. | Mathieu Servant Université Marie et Louis Pasteur (France); Nathan J. Evans University of Liverpool (England); Gordon D. Logan Vanderbilt University (USA); Thibault Gajdos Aix-Marseille Université (France)
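As a loose sketch of how a gated linkage between evidence accumulation and muscle activation can be simulated (the gating rule and all parameters here are my assumptions, not the authors' parameterization of the gated cascade diffusion model): decision evidence follows a diffusion process, and only its supra-gate portion flows into a motor stage whose output stands in for EMG.

```python
# Illustrative gated cascade from decision evidence to muscle activation.
import numpy as np

rng = np.random.default_rng(2)

def simulate_trial(drift=1.5, noise=1.0, gate=0.5, motor_gain=4.0,
                   motor_threshold=1.0, dt=0.001, max_t=3.0):
    """Return (response time, evidence trace, EMG-like motor trace)."""
    n = int(max_t / dt)
    evidence = np.zeros(n)
    emg = np.zeros(n)
    for i in range(1, n):
        # Decision stage: standard diffusion (Euler-Maruyama step).
        evidence[i] = evidence[i - 1] + drift * dt \
            + noise * np.sqrt(dt) * rng.standard_normal()
        # Motor stage: receives only supra-gate evidence, so EMG onset
        # lags decision onset and ramps up as evidence grows.
        flow = max(evidence[i] - gate, 0.0)
        emg[i] = emg[i - 1] + motor_gain * flow * dt
        if emg[i] >= motor_threshold:          # overt response emitted
            return (i * dt, evidence[: i + 1], emg[: i + 1])
    return (None, evidence, emg)

rt, evidence, emg = simulate_trial()
print(f"simulated RT: {rt:.3f} s" if rt else "no response within window")
```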
Christoph Huber-Huber University of Trento, Italy |
Active visual perception and the role of prediction across eye movements | In the real world, our eyes move about three to four times per second in a quick and jerky way that divides the seemingly continuous subjective visual experience of the world around us into perpetually changing discrete snapshots. Surprisingly, we are not aware of these particular spatiotemporal dynamics of active visual perception. It has been suggested that the reason for this unawareness is that our brains process sensory information in a predictive way. In this talk, I will present EEG/MEG and eye-tracking coregistration studies that aim to better understand these predictive aspects of active visual perception. I will arrive at the conclusion that active vision really changes how we see, and I will end with some considerations about the neural basis for the temporal structure of active gaze behavior. In sum, this talk will highlight the importance of considering active vision in the neurocognitive study of visual perception. | Christoph Huber-Huber University of Trento, Italy
Kimele Persaud Rutgers University - Newark |
Bayesian Models of Memory for Expectation-Congruent and Incongruent Items across Development | Bayesian models that assume recall is a mixture of prior knowledge and noisy memory traces have been used to explain memory in children and adults (Persaud et al., 2016, 2021). Yet two challenges to this modeling framework for understanding expectations and memory more broadly persist: 1) these models have overwhelmingly been applied to recall of items that are congruent with or unrelated to people’s prior expectations, and 2) current models fail to capture other factors that influence memory in development, such as attention and inhibitory control. In this talk, we first present empirical data and modeling from past work assessing the influence of prior knowledge on recall of congruent items (i.e., color recall) across development (Persaud et al., 2021). Based on the limitations of this work, we then present simulations from a new model, akin to that of Hemmer and Steyvers (2009), that captures both the impact of executive functions and expectation-incongruence on memory in children and adults. | Kimele Persaud Rutgers University - Newark; Carla Macias Rutgers University - Newark
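The core computation these mixture models share is compact. The sketch below (a minimal illustration in the spirit of Hemmer & Steyvers, 2009; the parameter values and the adult/child noise contrast are assumptions for demonstration only) shows how noisier memory traces pull recall harder toward prior expectations:

```python
# Recall as a posterior mean: a precision-weighted mix of prior knowledge
# and a noisy memory trace (conjugate Gaussian case).
import numpy as np

rng = np.random.default_rng(3)

prior_mean, prior_sd = 0.5, 0.15              # category knowledge (e.g., typical hue)
trace_sd_adult, trace_sd_child = 0.10, 0.25   # assumed: noisier traces in children

def recall(stimulus, trace_sd):
    trace = stimulus + rng.normal(0.0, trace_sd)          # noisy encoding
    w = (1 / trace_sd**2) / (1 / trace_sd**2 + 1 / prior_sd**2)
    return w * trace + (1 - w) * prior_mean               # posterior mean

stimulus = 0.8   # an item far from the category center
print("adult recall:", np.mean([recall(stimulus, trace_sd_adult) for _ in range(5000)]))
print("child recall:", np.mean([recall(stimulus, trace_sd_child) for _ in range(5000)]))
# Noisier traces produce stronger "regression to the category mean".
```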
Fenna Poletiek Leiden University / Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
Mechanisms of serial recall can explain the preponderance of center embedded syntactic structures | A defining characteristic of human language is hierarchical recursion. Recursive loops (e.g. relative clauses) in sentences can either be embedded in the center of a sentence or cross each other. It is still unknown why in Indo-European languages the possibility for center-embedded (CE) recursion seems ubiquitous, as in The boy A1 the dog A2 chases B2 falls B1 (A1A2B2B1), whereas crossed-dependent (CD) orderings of recursion (A1A2B1B2) hardly ever occur. In both structures, serially encoded words (e.g. boy and dog) must be retrieved and bound to later upcoming words (chases and falls). The exceptional rarity of CD as compared to CE grammars is surprising considering that the latter produce dependent elements at longer distances than the former. We propose that the preponderance of CE can be explained by item retention and retrieval mechanisms of serial recall combined with word binding operations (e.g., word A is the subject noun of verb word B) specific to language comprehension. Our account shows that backward retrieval (retrieving dog (A2) first and boy (A1) next, as in CE) optimizes memory performance as compared to forward retrieval, as in CD. We design two Retrieval and Binding Performance (RBP) functions, for CE and CD, and show by numeric comparison that RBP for CE is larger than for CD for a given sentence. Moreover, independent serial recall data support this difference in efficacy between the two strategies, under conditions that mimic sentence processing. We propose that CE is better molded to human memory than CD, which might explain why CE has prevailed during language evolution. | Fenna Poletiek Leiden University, NL; Peter Hagoort MPI for Psycholinguistics, Nijmegen, Netherlands; Bruno Bocanegra Erasmus University, Rotterdam, Netherlands
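Since the actual RBP functions are not reproduced here, the following is only a toy stand-in (the exponential decay and interference assumptions are mine, not the authors') showing how a numeric comparison of this kind can favor backward (CE) over forward (CD) retrieval even though CE involves a longer dependency:

```python
# Toy score: each retrieval is penalized by retrieval distance and by the
# number of more recently encoded items still pending at retrieval time.
import math

def performance(encode_order, retrieve_events, decay=0.5, interference=0.5):
    """retrieve_events: list of (position, item) retrieval operations."""
    pending = set(encode_order)
    score = 1.0
    for pos, item in retrieve_events:
        distance = pos - encode_order.index(item) - 1
        more_recent = sum(1 for p in pending
                          if encode_order.index(p) > encode_order.index(item))
        score *= math.exp(-decay * distance) * interference ** more_recent
        pending.remove(item)
    return score

# "The boy(A1) the dog(A2) chases(B2) falls(B1)"
encode = ["A1", "A2"]
ce = performance(encode, [(2, "A2"), (3, "A1")])   # center-embedded: backward
cd = performance(encode, [(2, "A1"), (3, "A2")])   # crossed: forward
print(f"toy RBP-like score, CE: {ce:.3f}  CD: {cd:.3f}")
```

Under these assumptions distance penalties alone tie the two orders; the interference term, which backward retrieval avoids by always retrieving the most recent pending item first, is what gives CE the advantage.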
Philippe Colo University of Bern & ETH Zürich |
Ethical Equilibrium Concepts in Game Theory | Equilibria are the most central concept in game theory. For a given strategic interaction between players, equilibrium concepts determine how their preferences and beliefs will balance each other out. Among existing equilibrium concepts, the Nash equilibrium has a hegemonic and rarely questioned position. While empirically successful, the logic it depicts often leads to normatively questionable outcomes. Scholars have shown that equilibrium concepts in games are equivalent to beliefs players hold regarding the game they are playing and how other players take decisions. I will call these beliefs behavioural beliefs (bbeliefs). In this paper I reflect on the nature and normativity of bbeliefs. I will argue that we can have positive deontological influence on bbeliefs and ought to do so. This amounts to adopting multiple bbeliefs and searching for evidence regarding those held by our coplayers, an attitude that constitutes a form of a priori suspension of judgement on bbeliefs. | Philippe Colo
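For readers outside game theory, a minimal example of the Nash logic the paper interrogates (standard textbook material, not code from the paper): in the Prisoner's Dilemma, the only profile from which no player gains by deviating unilaterally is mutual defection, even though both players prefer mutual cooperation.

```python
# Enumerate the profiles of a 2x2 game and check the Nash condition.
import itertools

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def is_nash(profile):
    """A profile is Nash iff no player gains by unilateral deviation."""
    row, col = profile
    best_row = max(payoffs[(a, col)][0] for a in actions)
    best_col = max(payoffs[(row, a)][1] for a in actions)
    return payoffs[profile][0] == best_row and payoffs[profile][1] == best_col

for profile in itertools.product(actions, actions):
    print(profile, payoffs[profile], "Nash" if is_nash(profile) else "-")
```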
Geoff Woodman Vanderbilt University |
Models of long-term memory already produce key signatures of visual working memory | The study of working memory and long-term memory diverged decades ago, resulting in views about the diagnostic nature of certain measures that might not be valid. Specifically, it is viewed as critical to show a capacity limit to demonstrate that we are measuring visual working memory storage and not the unlimited-capacity visual long-term memory store. Although this logic is sound, it remains to be seen whether these hallmarks of working memory might already fall out of existing models of long-term memory. Here we show that the precision and set-size effects that we visual working memory researchers often use to validate our assumption that we are studying visual working memory are also observed as a natural result of the dynamics of contextual models of long-term memory storage and retrieval. Our findings motivate re-examining unified models of human memory, paired with multi-modal empirical studies that target key questions about the nature of visual working memory and long-term memory. | Sean Polyn Vanderbilt University; Geoff Woodman Vanderbilt University
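One way to see how such signatures can emerge without a capacity limit is a toy context-based memory (my own construction for illustration, not Polyn and Woodman's model): traces stored under drifting context interfere at retrieval, so report precision degrades as more items are stored.

```python
# Set-size costs from contextual interference alone, with no storage limit.
import numpy as np

rng = np.random.default_rng(4)

def report_error(set_size, drift=0.3, trials=4000):
    errors = []
    for _ in range(trials):
        colors = rng.uniform(-np.pi, np.pi, set_size)   # circular feature
        probe = rng.integers(set_size)
        # Contextual match to each stored trace falls off with the number
        # of encoding steps separating it from the probed item.
        sim = np.exp(-drift * np.abs(np.arange(set_size) - probe))
        weights = sim / sim.sum()
        # Retrieval blends traces in proportion to contextual match.
        est = np.angle(np.sum(weights * np.exp(1j * colors)))
        err = np.angle(np.exp(1j * (est - colors[probe])))
        errors.append(err)
    return np.std(errors)

for n in (1, 2, 4, 8):
    print(f"set size {n}: report SD = {report_error(n):.2f}")
```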
Ghislain Fourny ETH Zurich
Progress report on an extension theory of quantum physics | Bell inequalities rest on two assumptions: free choice and locality. Since nature violates these inequalities, at least one of the two assumptions must be given up. This leaves the door open to proving that Einstein was right about the incompleteness of quantum theory if we are willing to weaken the free choice assumption. In this talk, I will give a status update on our recent progress in building this non-Nashian theory of physics, including an AI-based approach for determining underlying formulas based on observed correlations. | Ghislain Fourny ETH Zurich
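For context, the Bell-inequality arithmetic the talk builds on can be checked in a few lines (the standard CHSH setup, not the speaker's code): under free choice and locality the CHSH combination is bounded by 2, while quantum singlet-state correlations reach 2√2.

```python
# CHSH value for the quantum singlet state versus the local bound of 2.
import numpy as np

def E(a, b):
    """Singlet-state correlation of spin measurements at angles a, b."""
    return -np.cos(a - b)

# Standard CHSH measurement angles (radians).
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"|S| = {abs(S):.3f}  (local bound 2, Tsirelson bound {2 * np.sqrt(2):.3f})")
```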
Sky Jiawen Liu Cardiff University |
Which violent offense is more serious? Spiking, setting fire or choking? | In this project, we examine how lay individuals assess the severity of violent offenses. Previous literature suggests that the ranking of violent offenses in terms of severity tends to be more consistent across different respondent groups and scenarios compared to numerical ratings of seriousness. However, using a pairwise ordering task, we found that the perceived severity of violent offenses varies depending on the context in which the offense occurs. | Sky Jiawen Liu Cardiff University |
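One standard way to turn pairwise seriousness judgments into a severity scale, shown here purely as an illustration with made-up counts (the abstract does not say which model, if any, the author fits), is a Bradley-Terry model estimated with Hunter's (2004) MM updates:

```python
# Bradley-Terry severity scaling from pairwise "which is more serious?" data.
import numpy as np

offenses = ["spiking", "setting fire", "choking"]
# wins[i, j] = respondents judging offense i more serious than offense j
# (hypothetical counts, for illustration only).
wins = np.array([[0, 6, 4],
                 [14, 0, 9],
                 [16, 11, 0]], dtype=float)

strength = np.ones(len(offenses))
for _ in range(200):                       # MM updates (Hunter, 2004)
    for i in range(len(offenses)):
        num = wins[i].sum()
        denom = sum((wins[i, j] + wins[j, i]) / (strength[i] + strength[j])
                    for j in range(len(offenses)) if j != i)
        strength[i] = num / denom
    strength /= strength.sum()             # fix the overall scale

for name, s in sorted(zip(offenses, strength), key=lambda t: -t[1]):
    print(f"{name:13s} {s:.2f}")
```

The finding reported in the abstract, that perceived severity shifts with context, would show up here as different fitted strengths when the same pairs are judged under different scenario framings.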
Zainab Mohamed IU Bloomington |
On the fundamental processes of recognition memory | Three long-term recognition memory studies were carried out to explore the basic processes of recognition memory and the ability of current models to generalize to new settings. The experiments varied list length, stimulus type, and testing format within and between participants. Response probabilities were predicted accurately by the REM model of Shiffrin & Steyvers (1997). The model's ability to generalize and account for data from such a large number of conditions is surprising because REM is missing many components known to play an important role in recognition memory. This suggests that the few components it does include are fundamental and important enough to produce a good approximation to most results from recognition memory studies. These processes include incomplete or error-prone storage, comparing each test probe to memory traces, and computing a likelihood ratio that the trace is old, based on which an old/new recognition decision is made. | Zainab Rajab Mohamed IU Bloomington; Constantin G. Meyer-Grant University of Freiburg; Richard M. Shiffrin IU Bloomington
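The likelihood-ratio computation at REM's heart is compact enough to sketch (a simplified rendering of Shiffrin & Steyvers, 1997, with illustrative parameter values; it follows the published model only loosely):

```python
# REM-style old/new recognition: error-prone traces, per-trace likelihood
# ratios, respond "old" if the mean ratio (the odds) exceeds 1.
import numpy as np

rng = np.random.default_rng(5)
g, c, u, n_feat = 0.4, 0.7, 0.5, 20   # geometric base rate, copy accuracy,
                                      # storage probability, features per item

def make_item():
    return rng.geometric(g, n_feat)            # feature values ~ Geometric(g)

def store(item):
    trace = np.zeros(n_feat, dtype=int)        # 0 = nothing stored
    stored = rng.random(n_feat) < u
    correct = rng.random(n_feat) < c
    trace[stored & correct] = item[stored & correct]
    trace[stored & ~correct] = rng.geometric(g, (stored & ~correct).sum())
    return trace

def likelihood_ratio(probe, trace):
    lam = 1.0
    for p, t in zip(probe, trace):
        if t == 0:
            continue                           # unstored features are ignored
        if t == p:                             # match on value t
            prior = g * (1 - g) ** (t - 1)
            lam *= (c + (1 - c) * prior) / prior
        else:
            lam *= 1 - c                       # mismatch
    return lam

items = [make_item() for _ in range(8)]
memory = [store(it) for it in items]
for name, probe in [("old", items[0]), ("new", make_item())]:
    odds = np.mean([likelihood_ratio(probe, tr) for tr in memory])
    print(f"{name} probe: odds = {odds:.2f} -> {'old' if odds > 1 else 'new'}")
```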
Mark Steyvers University of California, Irvine |
Communicating Uncertainty with Large Language Models | Large language models (LLMs) play a growing role in decision-making, yet their ability to convey and interpret uncertainty remains a challenge. We examine two key issues: (1) how LLMs interpret verbal uncertainty expressions compared to human perception and (2) how discrepancies between LLMs’ internal confidence and their explanations create a disconnect between what users think the model knows and what it actually knows. We identify a calibration gap, where users overestimate LLM accuracy, and a discrimination gap, where explanations fail to help users distinguish correct from incorrect answers. Longer explanations further inflate user confidence without improving accuracy. By aligning LLM explanations with internal confidence, we show that both gaps can be reduced, improving trust calibration and decision-making. | Mark Steyvers University of California, Irvine |
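The calibration gap can be quantified with standard tools. The sketch below (synthetic data and an assumed +0.15 overconfidence shift, purely illustrative of the abstract's terminology) compares expected calibration error computed from a model's internal confidence against the same quantity computed from inflated user confidence:

```python
# Calibration gap illustration: a calibrated model, overconfident users.
import numpy as np

rng = np.random.default_rng(6)
n = 2000
model_conf = rng.uniform(0.5, 1.0, n)         # model's internal confidence
correct = rng.random(n) < model_conf          # a well-calibrated model
# Assumed: users overestimate accuracy, e.g. after long fluent explanations.
user_conf = np.clip(model_conf + 0.15, 0.0, 1.0)

def ece(conf, correct, bins=10):
    """Expected calibration error over equal-width confidence bins."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (conf > lo) & (conf <= hi)
        if m.any():
            total += m.mean() * abs(conf[m].mean() - correct[m].mean())
    return total

print(f"model ECE: {ece(model_conf, correct):.3f}")   # near 0: calibrated
print(f"user  ECE: {ece(user_conf, correct):.3f}")    # the gap: overtrust
```

Aligning the explanations users see with the model's internal confidence, as the abstract proposes, would amount to shrinking the gap between these two curves.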
Olgun Sadik Indiana University, Intelligent Systems Engineering |
Exploring Student Interactions with Generative AI in Engineering Education | There has been considerable contemporary interest in using generative AI tools for teaching and learning in K-16 education. Recent research has provided evidence of their effective use in enhancing instructor productivity. However, understanding how students utilize these tools is crucial for educators, especially given concerns related to ethical use. While some instructors have developed policies to limit student reliance on these tools, such measures are not sustainable, as students will inevitably engage with them. To understand student usage and provide a guiding framework, this study employs discourse analysis to explore how students interact with a generative AI tool in a software systems engineering course. Students were asked to share their conversations with the AI tool, along with reflections on their experiences. The interactions were analyzed to examine both the structural and functional aspects of language use. Additionally, the reflections were analyzed using thematic analysis. | Olgun Sadik Indiana University, Intelligent Systems Engineering
Nada Aggadi Indiana University Bloomington |
Measuring Factors Associated with Identification Thresholds in Fingerprint Analysts | The goal of this research is to measure and identify factors associated with identification thresholds in fingerprint analysts. Conclusions reached in friction ridge comparisons require the application of an individual threshold. While previous studies have investigated the mechanisms behind value determinations or the perceived stress reported by forensic examiners, only a few have focused on the influence of personality traits and environmental factors on identification thresholds. This study measures how individual traits and workplace policies may shape these thresholds. Participants were presented with latent prints and tasked with making value determinations before conducting a latent print comparison. Post-trial, participants responded to a series of survey inquiries focusing on their personalities and interactions within the work environment. Our results demonstrate a significant positive correlation between NFC score and the number of Identification decisions as well as a | Nada Aggadi Indiana University Bloomington; Tom Busey Indiana University Bloomington; Meredith Coon AVER, LLC