AIAI1: Agency and Intentions in Artificial Intelligence
University of Göttingen, Göttingen, Germany, May 15-17, 2024

Conference website: https://ail-workshop.github.io/aiai-conference/index.html
Submission link: https://easychair.org/conferences/?conf=aiai1
Submission deadline: February 11, 2024
We cordially invite submissions from linguists, philosophers, cognitive scientists, and computer scientists exploring topics related to agency and intentions with respect to human linguistic competence and/or in AI systems. Some questions and topics within the scope of the conference are as follows:
- Are AI systems, or LLMs in particular, unique kinds of agents? How should we understand the human propensity to treat them as such? Do AI systems and LLMs produce linguistic outputs that can be understood through the concepts of "intentional action" or "intentions"?
- Are AI systems or LLMs unique language users? How can we best study, discuss, and engage with their linguistic outputs?
- What semantic properties and conceptions can be attributed to outputs from LLMs or other AI systems? How similar to, or different from, are they the semantic properties and conceptions we use when theorizing about human linguistic competence?
- What do LLMs teach us about concepts themselves, specifically those related to agency, such as "intentions," "decision-making," "reasons," and "judgment"? Are there fundamental differences in the way "intentional action" is captured in human language as compared to how it is captured in LLMs?
- Are LLMs participating in acts and expressions in similar ways to human agents? For example, do LLMs encode for something like an “understanding” of concepts? Do they “refer” to things and ideas in their linguistic outputs? Are they “responding” to human requests and inquiries?
- Are LLM concept vectors sufficiently grounded, i.e., are they connected in the right ways to the real world, to constitute certain semantic properties that human expressions possess?
- How are specific ethical problems related to AI informed by the above questions about the linguistic capacities of AI systems? How might those ethical issues be better addressed?
- Can cognitive scientific models of human thinking, agency, and decision-making benefit from studying LLMs? What can cognitive science tell us about how LLMs “process” information?
- How do computer scientists think about the role of agency and intentions when developing LLMs?
The list of topics above is not exhaustive. At the heart of these topics is a drive to learn and discover more about AI systems as potential agents and decision-makers. While the conference is not directly focused on providing solutions to ethical problems in AI development, questions of ethics and moral responsibility both motivate the discussion and will be included in the conference. What AIAI will uniquely achieve, though, is an interdisciplinary conversation about the technical, philosophical, and linguistic features of the very AI systems that humans will continue to employ in ever more domains of social life. A new phase of AI is here, and we think this offers new opportunities and challenges for people in all areas of life. Our goal is to meet these opportunities and challenges through the unique theoretical perspectives offered by linguists, philosophers, computer scientists, and cognitive scientists. After all, it is impossible to take practical moral action in response to AI systems if we cannot make sense of what AI is, does, or intends.
Submission opens: November 13, 2023
Submission deadline: February 11, 2024
Anonymous abstracts, not exceeding 2 pages (including references and examples), set in Times New Roman at a font size no smaller than 11 point with 2 cm margins, should be uploaded to the AIAI EasyChair website. We expect to notify authors of acceptance by March 11, 2024. Presentations will be allotted 30-minute slots, with 15 minutes for questions and discussion.