Tracks

A1) The Nature of AI-based Systems TBA
B1) Responsible and Trustworthy AI

Normative Perspectives on and Societal Implications of AI Systems

As AI systems continue to permeate various sectors and aspects of our lives, ensuring their trustworthy and responsible design and use becomes critical. This conference track aims to bring together interdisciplinary research from philosophy, law, psychology, economics, and other relevant fields to explore the normative perspectives on and societal implications of developing and employing AI systems.

It will examine the challenges of setting domain-dependent minimal requirements for the verification, explanation, and assessment of ML systems, with the aim of ensuring their trustworthy and responsible use in real-world applications.

Topics

The topics of interest include but are not limited to:

– Legal and Regulatory Challenges

– Understanding the legal consequences of using AI systems

– Robust legal frameworks to support responsible AI adoption, including methods for and challenges to compliance with the future EU AI Act

– Ethical and Psychological Challenges

– Balancing the benefits and risks of AI systems

– Ethical design and value alignment of AI systems

– Addressing AI-related fears, misconceptions, and biases

– Economic and Societal Challenges

– Assessing the impact of AI on the labor market and workforce

– Evaluating the trade-offs of AI implementation

– AI for social good: opportunities and challenges in driving economic growth and addressing societal issues

– Responsibility and Accountability

– Attributing blame and answerability between AI systems and their users

– The role of human oversight in AI decision-making

– Legal responsibility (e.g., civil liability) in case of malfunctioning AI systems

– Ownership and Attribution in AI-Generated Content

– Determining the originator of AI-generated text, code, and art

– Intellectual property rights and revenue sharing for AI-created works

– The impact of AI-generated content on education, grading, and academic integrity

– Bias, Discrimination, and Algorithmic Fairness

– Understanding and mitigating biases/promoting fairness in AI systems

– Ensuring fairness and preventing discrimination in AI applications

– Legal obligations arising from anti-discrimination laws regarding AI systems

– The role of privacy issues and data protection laws for bias mitigation

– Perspicuity in AI Systems

– The role of transparency, explainability, and traceability in societal risk mitigation

– Suitability of methods for enhancing AI system perspicuity and user understanding with respect to societal expectations and desiderata

– Transparency in socio-technical ecosystems

– Holistic Assessment of AI Models

– Evaluating AI models within larger decision contexts

– Evaluating normative choices in AI system design and deployment

– Investigating consequences of implementing AI systems for groups, organizations, or societies

 

By exploring these topics, this track will contribute to a deeper understanding of the normative perspectives and societal implications surrounding AI.

It aims to promote the development of responsible, trustworthy, and beneficial AI technologies while addressing the ethical, legal, psychological, economic, and more generally societal challenges they pose in real-world applications.

 

Track Organizers

Kevin Baum, German Research Center for Artificial Intelligence, DE

Thorsten Helfer, Saarland University, DE

Markus Langer, Philipps-University of Marburg, DE

Eva Schmidt, TU Dortmund, DE

Andreas Sesing-Wagenpfeil, Saarland University, DE

Timo Speith, University of Bayreuth, DE

C1) Safety Verification of DNNs

Formal verification of neural networks, and of machine learning models more broadly, has been a burgeoning field in the past few years, with interest continuing to grow given the increasing adoption and applicability of these data-driven methods. This track will focus on methods for the formal verification of machine learning models, including neural networks but also other model types, across application domains.

Papers and presentations describing methodologies, software frameworks, technical approaches, and case studies, among other contributions, are all welcome.

In addition, benchmarks are critical for evaluating scalability and broader progress within formal methods. Recent benchmarks used to evaluate neural network verification methods, and machine learning verification more broadly, have focused largely on computer vision problems, specifically the local robustness of image classifiers to adversarial perturbations.
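As a concrete illustration (the notation is ours and stated informally, not in any prescribed benchmark format), such a local robustness specification requires, for a classifier f, an input x with known label c, and a perturbation radius ε, that

    ∀ x′ : ‖x′ − x‖_∞ ≤ ε  ⇒  argmax_i f_i(x′) = c,

i.e., every input within the ε-ball around x must receive the same classification as x.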

However, neural networks and machine learning models are being used across a variety of safety and security critical domains, and domain-specific benchmarks—both in terms of the machine learning models and their specifications—are necessary to identify limitations and directions for improvements, as well as to evaluate and ensure applicability of these methods in these domains.

For instance, controllers in autonomous systems are increasingly created with data-driven methods, and malware classifiers in security are often neural networks; each of these domains has its own specificities, as do countless other applications in cyber-physical systems, finance, science, and beyond.

 

This track will also focus on collecting and publishing benchmarks — both models and specifications — from computer vision, finance, security, and other domains where formal verification of neural networks and machine learning models is being considered.

The track further aims to collect benchmarks for future iterations of the International Verification of Neural Networks Competition (VNN-COMP) and for the Artificial Intelligence and Neural Network Control Systems (AINNCS) category of the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP), as well as input for the Verification of Neural Networks standard (VNN-LIB).

The event will award prizes for the best benchmarks, specifications, and related contributions.

 

Track Organizers

Daniel Neider, TU Dortmund, DE

Taylor T. Johnson, Vanderbilt University, US

C2) Verification meets Learning and Statistics

Numerous systems in areas like healthcare, transportation, finance, and robotics interact with our everyday lives, and thus strong reliability and safety guarantees on the control of these systems are required.

However, traditional methods for ensuring such guarantees, from areas such as formal verification, control theory, and testing, often do not adequately account for several fundamental aspects: the uncertainty inherent in systems that operate on data from the real world; the uncertainty arising from the systems themselves being only partially known or black-box; and the sheer complexity or astronomical size of the systems.

Therefore, a joining of forces is in order among the areas of verification, machine learning, AI planning, and general statistical methods.

Within this track, we welcome all contributions at the interface of these areas.

Examples of concrete topics are:

– safety in reinforcement learning

– verification of probabilistic systems with the help of learning

– statistical guarantees on system correctness and statistical model checking (see the sketch below)

– testing and model learning under uncertainty (‘flaky’ testing)
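To give the flavour of such statistical guarantees, the following minimal sketch illustrates statistical model checking by Monte Carlo simulation: it draws enough sample traces that, by a standard Hoeffding bound, the estimated satisfaction probability lies within ε of the true one with confidence at least 1 − δ. The function names and the toy system are ours, for illustration only.

```python
import math
import random

def smc_estimate(simulate_trace, holds, eps=0.01, delta=0.001):
    """Estimate P(property holds) by simulation. By Hoeffding's inequality,
    n >= ln(2/delta) / (2 * eps**2) samples suffice for the estimate to be
    within eps of the true probability with probability at least 1 - delta."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    successes = sum(holds(simulate_trace()) for _ in range(n))
    return successes / n, n

# Toy stand-in for a stochastic system: a 'trace' is a single random draw,
# and the property holds with true (unknown to the checker) probability 0.7.
estimate, samples = smc_estimate(
    simulate_trace=lambda: random.random(),
    holds=lambda trace: trace < 0.7,
)
print(f"P(property) is approximately {estimate:.3f} from {samples} samples")
```

Real statistical model checkers replace the toy simulator with runs of the actual stochastic system and often use sequential tests to stop early, but the shape of the guarantee is the same.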

 

Track Organizers

Jan Křetínský, TU Munich, DE

Kim Larsen, Aalborg University, DK

Nils Jansen, Radboud University Nijmegen, NL

Bettina Könighofer, TU Graz, AT

C3) Compilation and Execution Environments TBA
D1) Health Care

 

AI in healthcare is transforming the field by improving diagnostics, aiding in medical imaging analysis, personalizing treatment, and supporting clinical decision-making. It enables faster and more accurate analysis of medical data, enhances drug discovery, and assists in robot-assisted surgeries. AI also contributes to predictive analytics, virtual assistants, and wearable devices.

However, it is important to remember that AI is a tool to support healthcare professionals rather than replace them, and ethical considerations and data privacy are crucial in its implementation.

This track is devoted to discussions and exchange of ideas on questions like:

  1. Explainability and Interpretability: How can AI algorithms be made transparent and understandable to healthcare providers and patients?
  2. Data Quality and Integration: How can diverse healthcare data sources be integrated while ensuring data quality and interoperability?
  3. Ethical and Legal Considerations: What ethical and legal frameworks should be established to address privacy, consent, bias, and responsible AI use?
  4. Validation and Clinical Implementation: How can AI algorithms be rigorously tested and integrated into clinical workflows?
  5. Robustness and Reliability: How can AI systems be made robust, reliable, and adaptable to changing patient populations and data quality?
  6. Human-AI Collaboration: How can AI systems effectively collaborate with healthcare professionals?
  7. Long-term Impact and Cost-effectiveness: What is the long-term impact and cost-effectiveness of AI in healthcare?
  8. Regulatory and Policy Frameworks: What regulatory and policy frameworks are needed for the development and deployment of AI in healthcare?

 

These research questions drive efforts to address technical, ethical, legal, and societal challenges to maximize the benefits of AI in healthcare.

Note that this text was mostly generated using ChatGPT.

 

Track Organizers

Martin Leucker, University of Lübeck, DE

D2) AI Assisted Programming

 

Neural program synthesis using large language models (LLMs) trained on open-source code is quickly becoming a popular addition to the software developer’s toolbox.

Services such as ChatGPT and GitHub Copilot, with their integrations into popular IDEs, can generate code in many different programming languages from natural language requirements. This opens up fascinating new perspectives, such as increased productivity and the accessibility of programming to non-experts.

However, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output based on their training data, and there are countless examples of coherent but erroneous results.

Even alert users fall victim to automation bias: the well-studied tendency of humans to over-rely on computer-generated suggestions.

The area of software development is no exception.
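One simple way in which quality assurance can act as a guardrail here is to subject an assistant's suggestion to a randomized property check before accepting it. The sketch below is illustrative only: the suggested function stands in for generated code and is not the output of any particular tool.

```python
import random

# Stand-in for assistant-generated code; in practice this body would come
# from a code-generation service rather than being written by hand.
def suggested_sort(xs):
    return sorted(xs)

def check_suggestion(fn, trials=1000):
    """Randomized property check: the output must be sorted and be a
    permutation of the input. Passing increases confidence but, unlike
    formal verification, proves nothing."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        ys = fn(list(xs))
        assert all(a <= b for a, b in zip(ys, ys[1:])), f"not sorted on {xs}"
        assert sorted(xs) == sorted(ys), f"not a permutation on {xs}"
    return True

print("suggestion accepted:", check_suggestion(suggested_sort))
```

Such lightweight checks complement, rather than replace, the specification and verification techniques discussed in this track.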

 

This track is devoted to discussions and exchange of ideas on questions like:

– What are the capabilities of this technology when it comes to software development?

– What are the limitations?

– What are the challenges and research areas that need to be addressed?

– How can we facilitate the rising power of code co-piloting while achieving a high level of correctness, safety, and security?

– What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?

– What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?

– Can quality assurance methods and technologies themselves profit from the new power of LLMs?

 

Topics of relevance to this track include the interplay of LLMs with the following areas:

– Program synthesis

– Formal specification and verification

– Model driven development

– Static analysis

– Testing

– Monitoring

– Documentation

– Requirements engineering

– Code explanation

– Library explanation

– Coding tutorials

 

Track Organizers

Wolfgang Ahrendt, Chalmers University of Technology, SE

Klaus Havelund, NASA Jet Propulsion Laboratory, US

D3) Publishing TBA
D4) Automotive Driving TBA
E1) Education in Times of Deep Learning TBA