Tracks

A1) The Nature of AI-based Systems (TBA)
B1) Responsible and Trustworthy AI

Normative Perspectives on and Societal Implications of AI Systems

As AI systems continue to permeate various sectors and aspects of our lives, ensuring their trustworthy and responsible design and use becomes critical. This conference track aims to bring together interdisciplinary research from philosophy, law, psychology, economics, and other relevant fields to explore the normative perspectives on and societal implications of developing and employing AI systems.

It will examine the challenges of setting domain-dependent minimal requirements for verification, explanation, and assessment of ML systems with the aim of ensuring trustworthy and responsible use of real-world applications.

Topics

The topics of interest include but are not limited to:

  • Legal and Regulatory Challenges

– Understanding the legal consequences of using AI systems

– Robust legal frameworks to support responsible AI adoption, including methods for and challenges to compliance with the future EU AI Act

  • Ethical and Psychological Challenges

– Balancing the benefits and risks of AI systems

– Ethical design and value alignment of AI systems

– Addressing AI-related fears, misconceptions, and biases

  • Economic and Societal Challenges

– Assessing the impact of AI on the labor market and workforce

– Evaluating the trade-offs of AI implementation

– AI for social good: opportunities and challenges in driving economic growth and addressing societal issues

  • Responsibility and Accountability

– Attributing blame and answerability between AI systems and their users

– The role of human oversight in AI decision-making

– Legal responsibility (e.g., civil liability) in case of malfunctioning AI systems

  • Ownership and Attribution in AI-Generated Content

– Determining the originator of AI-generated text, code, and art

– Intellectual property rights and revenue sharing for AI-created works

– The impact of AI-generated content on education, grading, and academic integrity

  • Bias, Discrimination, and Algorithmic Fairness

– Understanding and mitigating biases and promoting fairness in AI systems

– Ensuring fairness and preventing discrimination in AI applications

– Legal obligations arising from anti-discrimination laws regarding AI systems

– The role of privacy issues and data protection laws for bias mitigation

  • Perspicuity in AI Systems

– The role of transparency, explainability, and traceability in societal risk mitigation

– Suitability of methods for enhancing AI system perspicuity and user understanding with respect to societal expectations and desiderata

– Transparency in socio-technical ecosystems

  • Holistic Assessment of AI Models

– Evaluating AI models within larger decision contexts

– Evaluating normative choices in AI system design and deployment

– Investigating consequences of implementing AI systems for groups, organizations, or societies

 

By exploring these topics, this track will contribute to a deeper understanding of the normative perspectives and societal implications surrounding AI.

It aims to promote the development of responsible, trustworthy, and beneficial AI technologies while addressing the ethical, legal, psychological, economic, and, more generally, societal challenges they pose in real-world applications.

Submission 

Please submit your contributions here: https://equinocs.springernature.com/service/AISoLA2023_RTAI

Submitting an Abstract 

Deadline for submissions of abstracts: July 31st, 2023

Please submit an abstract of up to 500 words.

Talks will be up to 45 min long and there will be plenty of time for discussions.

(Following the conference, there will also be an opportunity to submit related articles to the double-blind reviewed (post-)proceedings. We are still organising the details.)

 

Track Organizers

Name Institution
Kevin Baum German Research Center for Artificial Intelligence, DE
Thorsten Helfer Saarland University, DE
Markus Langer Philipps-University of Marburg, DE
Eva Schmidt TU Dortmund, DE
Andreas Sesing-Wagenpfeil Saarland University, DE
Timo Speith University of Bayreuth, DE
B2) Democracy in the Digital Era

Keynote by Moshe Y. Vardi: Technology and Democracy

For more than a decade now, studies by different organizations on the state of democracy worldwide, while using different indices and methodologies, have arrived at very similar conclusions: there has been a continuous quantitative and qualitative decline of democratic practices, including participation in and integrity of elections, civil liberties, and the rule of law.

Many analysts trace the origin of this decline back to the 1990s, the period following the fall of the Iron Curtain, characterized by the euphoric belief that democracy was a sort of natural state that would inevitably be not only preserved but also spread broadly via capitalism and globalization.

This optimism was further reinforced in the 1990s and 2000s by the “blossoming” of the internet and the World Wide Web, which promised to usher in a digital cultural renaissance that would reinvent and strengthen democracy.

This optimism turned out to be utopian, as democracy today is seen to be facing threats, some of which are in fact magnified by the socio-political impact of digital technologies.

While economic inequalities, the effects of unrestrained globalization, and constitutional fault lines are cited as the leading causes of the decline of democracy, these are ever more closely intertwined with the role played by digital technologies, and by Big Tech and its platforms in particular.

In the current context of potentially transformational developments in generative AI, the concentration of economic and political power in the hands of a very small number of very big companies further magnifies the threats to democratic processes and institutions, as well as the erosion and manipulation of the public sphere.

We are in fact witnessing an immense concentration of economic and political power, which those holding it can use to wield vast control over both our civic and individual lives. Since the beginning of history, technology has had a significant and occasionally transformational socio-political impact, with both positive and negative aspects.

The Monday afternoon session aims to examine the interaction between democracy and technology, identify threats and opportunities, and, where possible, formulate proposals for sustaining democracy in the Digital Era.

Track Organizers

Name Institution
George Metakides
B3) Digital Humanities and Cultural Heritage in AI and IT-Enabled Environments

Where next? 

We are in the middle of an AI and IT revolution and at a point of digital cultural heritage data saturation, but humanities scholarship is struggling to keep pace. In this track we discuss the challenges faced by both the computing and the historical sciences, with the aim of outlining a roadmap that addresses some of the most pressing issues of data access, preservation, conservation, harmonisation across national datasets, and governance on one side, and the opportunities and threats brought by AI and machine learning to the advancement of rigorous data analytics on the other. We concentrate primarily on written/printed documents rather than on pictures and images.
We stress the importance of collaboration across discipline boundaries and cultures to ensure that mutual respect and equal partnerships are fostered from the outset, so that better practices can ensue.

In the track we welcome contributions that address these and other related topics:

  • Advances brought by modern software development, AI, ML and data analytics to the transcription of documents and sources
  • Tools and platforms that address the digital divide between physical, analog or digital sources and the level of curation of datasets needed for modern analytics
  • Design for accessibility and interoperability of datasets, including corpora and thesauri
  • Tools and techniques for machine understanding of form-based documents, recognition of digits and codes, handwriting, and other semantically structured data
  • Knowledge representation for better analysis of semi-structured data from relevant domains (diaries, registers, reports, etc.)
  • Specific needs arising from the study of minority languages and populations, disadvantaged groups and any other rare or less documented phenomena and groups
  • Challenges relative to the conservation, publication, curation, and governance of data as open access artefacts
  • Challenges relative to initial and continuing education and curricular or extracurricular professional formation in the digital humanities professions
  • Spatial digital humanities

Submission 

Please submit your contributions via EquinOCS

Track Organizers 

Name Institution
Ciara Breathnach University of Limerick, IE
Tiziana Margaria University of Limerick, IE
C1) Safety Verification of DNNs

Formal verification of neural networks and broader machine learning models has been a burgeoning field over the past few years, with ever-increasing interest given the ongoing growth and applicability of these data-driven methods. This track will focus on methods for formal verification of machine learning models, including neural networks, but also other model types across application domains.

Papers and presentations describing methodologies, software frameworks, technical approaches, and case studies, among others, are all welcome contributions.

In addition, benchmarks are critical for evaluating scalability and broader progress within formal methods. Recent benchmarks used for the evaluation of neural network verification methods, as well as broader machine learning verification, have focused mostly on computer vision problems, specifically local robustness to adversarial perturbations of image classifiers.
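
For concreteness, the local robustness property underlying many of these benchmarks can be stated as follows (a standard formulation, not specific to this track; f denotes a classifier, x a reference input, and ε the perturbation radius):

    \forall x' \colon \; \|x' - x\|_\infty \le \varepsilon \;\Longrightarrow\; \operatorname{argmax}_i f_i(x') = \operatorname{argmax}_i f_i(x)

That is, the predicted class must not change anywhere within the ε-ball around the reference input.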

However, neural networks and machine learning models are being used across a variety of safety and security critical domains, and domain-specific benchmarks—both in terms of the machine learning models and their specifications—are necessary to identify limitations and directions for improvements, as well as to evaluate and ensure applicability of these methods in these domains.

For instance, controllers in autonomous systems are increasingly created with data-driven methods, and malware classifiers in security are often neural networks; each of these domains has its own specificities, as do countless other applications in cyber-physical systems, finance, science, and beyond.

 

This track will focus on collecting and publishing benchmarks (both models and specifications) from computer vision, finance, security, and other domains where formal verification of neural networks and machine learning is being considered.

The track will also aim to collect benchmarks for future iterations of the International Verification of Neural Networks Competition (VNN-COMP) and the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP) category on Artificial Intelligence and Neural Network Control Systems (AINNCS), as well as input for the Verification of Neural Networks standard (VNN-LIB).

The event will award prizes for the best benchmarks, specifications, etc.

Submission 

Please submit your contributions here: https://equinocs.springernature.com/service/AISoLA2023_SVDNN

Track Organizers

Daniel Neider, TU Dortmund, DE

Taylor T. Johnson, Vanderbilt University, US

C2) Verification meets Learning and Statistics

Numerous systems operate in areas like healthcare, transportation, finance, or robotics.

They interact with our everyday life, and thus strong reliability or safety guarantees on the control of these systems are required.

However, traditional methods to ensure such guarantees, such as those from the areas of formal verification, control theory, or testing, often do not adequately account for several fundamental aspects.

These include, for instance, the uncertainty inherent in systems that operate on data from the real world; the uncertainty arising from the systems themselves being only partially known or black-box; and the sheer complexity or astronomical size of the systems.

Therefore, a joining of forces is in order for the areas of verification, machine learning, AI planning, and general statistical methods.

Within this track, we welcome all contributions that may be placed on the interface of these areas.

Examples of concrete topics are:

– safety in reinforcement learning

– verification of probabilistic systems with the help of learning

– statistical guarantees on system correctness and statistical model checking (see the sketch after this list)

– testing and model learning under uncertainty (‘flaky’ testing)
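
To make the statistical flavour concrete, below is a minimal sketch of statistical model checking via Monte Carlo simulation. The system under test (a biased random walk) and the parameter values are hypothetical stand-ins; the correctness guarantee follows from Hoeffding's inequality:

    import math
    import random

    def run_is_safe(horizon: int = 100) -> bool:
        """Simulate one run of a hypothetical stochastic system (a biased
        random walk); the run is 'safe' if it never reaches level 10."""
        position = 0
        for _ in range(horizon):
            position += 1 if random.random() < 0.45 else -1
            if position >= 10:
                return False
        return True

    def smc_estimate(eps: float, delta: float) -> float:
        """Estimate p = P(run is safe) so that, by Hoeffding's inequality,
        P(|estimate - p| > eps) <= delta."""
        n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
        return sum(run_is_safe() for _ in range(n)) / n

    # Example: a 95%-confidence estimate accurate to within 0.01.
    print(smc_estimate(eps=0.01, delta=0.05))

The appeal of such methods is that the required sample size is independent of the system's internal structure, which is exactly what makes them attractive for black-box or very large systems.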

Submission 

Please submit your contributions here: https://equinocs.springernature.com/service/AISoLA2023_VMLS

Track Organizers

Jan Křetínský, TU Munich, DE

Kim Larsen, Aalborg University, DK

Nils Jansen, Radboud University Nijmegen, NL

Bettina Könighofer, TU Graz, AT

D1) Health Care

AI in healthcare is transforming the field by improving diagnostics, aiding in medical imaging analysis, personalizing treatment, and supporting clinical decision-making. It enables faster and more accurate analysis of medical data, enhances drug discovery, and assists in robot-assisted surgeries. AI also contributes to predictive analytics, virtual assistants, wearable devices, and clinical decision support.

However, it is important to remember that AI is a tool to support healthcare professionals rather than replace them, and ethical considerations and data privacy are crucial in its implementation.

This track is devoted to discussions and exchange of ideas on questions like:

  1. Explainability and Interpretability: How can AI algorithms be made transparent and understandable to healthcare providers and patients?
  2. Data Quality and Integration: How can diverse healthcare data sources be integrated while ensuring data quality and interoperability?
  3. Ethical and Legal Considerations: What ethical and legal frameworks should be established to address privacy, consent, bias, and responsible AI use?
  4. Validation and Clinical Implementation: How can AI algorithms be rigorously tested and integrated into clinical workflows?
  5. Robustness and Reliability: How can AI systems be made robust, reliable, and adaptable to changing patient populations and data quality?
  6. Human-AI Collaboration: How can AI systems effectively collaborate with healthcare professionals?
  7. Long-term Impact and Cost-effectiveness: What is the long-term impact and cost-effectiveness of AI in healthcare?
  8. Regulatory and Policy Frameworks: What regulatory and policy frameworks are needed for the development and deployment of AI in healthcare?

 

These research questions drive efforts to address technical, ethical, legal, and societal challenges to maximize the benefits of AI in healthcare.

Note that this text was mostly generated using ChatGPT.

 

Track Organizers

Martin Leucker, University of Lübeck, DE

D2) AI Assisted Programming

 

Neural program synthesis, using large language models (LLMs) trained on open-source code, is quickly becoming a popular addition to the software developer’s toolbox.

Services such as ChatGPT and GitHub Copilot, with their integrations into popular IDEs, can generate code in many different programming languages from natural language requirements. This opens up fascinating new perspectives, such as increased productivity and making programming accessible to non-experts.

However, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output, based on the training data, and there are countless examples of coherent but erroneous results.

Even alert users fall victim to automation bias: the well-studied tendency of humans to over-rely on computer-generated suggestions.

The area of software development is no exception.

 

This track is devoted to discussions and exchange of ideas on questions like:

– What are the capabilities of this technology when it comes to software development?

– What are the limitations?

– What are the challenges and research areas that need to be addressed?

– How can we facilitate the rising power of code co-piloting while achieving a high level of correctness, safety, and security?

– What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?

– What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?

– Can quality assurance methods and technologies themselves profit from the new power of LLMs?
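
To illustrate how conventional quality assurance can complement code co-piloting, here is a minimal sketch using property-based testing. The function merge_sorted is a hypothetical example of a coherent but erroneous assistant suggestion; the test uses the Hypothesis library:

    from hypothesis import given, strategies as st

    def merge_sorted(a: list[int], b: list[int]) -> list[int]:
        """A plausible assistant suggestion: looks coherent,
        but silently drops duplicate elements."""
        return sorted(set(a) | set(b))

    @given(st.lists(st.integers()), st.lists(st.integers()))
    def test_merge_keeps_all_elements(a, b):
        assert merge_sorted(a, b) == sorted(a + b)

Running pytest on this file produces a counterexample (e.g., a = [0], b = [0]) within seconds, illustrating how classical testing techniques can guard against coherent but erroneous generated code.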

 

Topics of relevance to this track include the interplay of LLMs with the following areas:

– Program synthesis

– Formal specification and verification

– Model-driven development

– Static analysis

– Testing

– Monitoring

– Documentation

– Requirements engineering

– Code explanation

– Library explanation

– Coding tutorials

 

Track Organizers

Wolfgang Ahrendt,  Chalmers University of Technology, SE

Klaus Havelund, NASA Jet Propulsion Laboratory, US

D3) Publishing
D4) Automated Driving

Today, the most prominent application of AI technology in the automotive domain is in the realm of environment perception. The diversity of the traffic environment and the complexity of sensor readings make it impossible to specify and implement perception functionality manually. Deep learning technology, on the other hand, has proven itself capable of solving the task very well. However, effectiveness alone does not constitute a comprehensive solution, and the issue of validation has not yet been satisfactorily resolved.

We invite researchers, practitioners, and experts to submit their original research contributions on the safety of AI-based autonomous vehicles to AISoLA. We particularly encourage submissions of contributions that invite discussion on the basis of new findings.

Topics of interest include, but are not limited to:

  • Safety verification and validation techniques for AI-based autonomous vehicles
  • Formal methods and their application in assuring the safety of AI-based autonomous systems
  • Human-AI interaction and trust in autonomous vehicle operations
  • Robustness and resilience of AI algorithms in uncertain and open environments
  • Handling ethical considerations and responsible decision-making in autonomous driving systems
  • Safety-critical system architectures for AI-based autonomous vehicles
  • Data-driven approaches for safety assurance and risk analysis in autonomous driving
  • Safety standards, regulations, and certification processes for AI-based autonomous vehicles
  • Testing, simulation, and validation methodologies for autonomous vehicle systems
  • Security and privacy aspects of AI-based autonomous vehicles

General publication details can be found on the Contribute page.

Because of the late announcement, we have extended the deadlines for the LNCS proceedings as follows:

Submission Deadline: July 21st, 2023
Notification of Acceptance: August 4th, 2023
Camera-Ready Deadline: August 14th, 2023

Submission 

Please submit your contributions via EquinOCS

Track Organizers 

Name Institution
Falk Howar TU Dortmund, DE
Hardi Hungar University of Oldenburg and German Aerospace Center, DE

E1) Education in Times of Deep Learning

The availability of models like ChatGPT and BERT radically impacts the future of teaching. One can either try to prohibit the use of such tools and continue with traditional education and examination, or integrate them into a new concept of DL-based teaching. Whereas the former approach comes with considerable effort for enforcement, the latter requires rethinking the way we learn, work, and teach. AISoLA aims at opening a discussion about the latter, which we consider the way to go: generative language models will invariably play a part in our lives, and education should aim to match that.