Call for Papers

AISoLA provides a forum for discussing how to deal responsibly with this new potential, structured in tracks addressing the following five concerns:


A) The nature of AI-based systems, i.e., their specific profiles in terms of strengths, weaknesses, new opportunities, and (future) implications such as threats. This track focuses mainly on neural network models, but discussion of other machine learning or statistical methods is also welcome. The goal of this track is not to debate whether AI systems should be promoted, but to understand their unique strength/weakness profile and how best to deal with them as they inevitably influence our reality. The questions raised about the influence of Cambridge Analytica on the BREXIT referendum already showcased the need to regulate AI systems, and this was only a beginning, as indicated by ChatGPT, which has the potential to radically change our entire educational system and professional lives. This influence is independent of the fact that large-scale AI-based systems, by their nature, will never be fully understood. Corresponding track:

A1) The Nature of AI-based Systems

B) Ethical, economic, and legal implications of AI systems in practice. If autonomous AI systems are to be employed in practical applications, responsible standards need to be set. This includes setting domain-dependent minimal standards for the verification, explanation, and testing of ML systems deemed trustworthy and responsible enough for real-world application. In particular, it is important to know whom to ‘blame’: the DL-based system or its user. This concerns not only potential error handling but also the attribution of ownership: Who is the originator of, e.g., ChatGPT-generated text and code? Who should be paid for paintings drawn by generative image models? Already today, teachers receive texts where these questions arise, with clear impact on grading. More complex is the situation with generated code, which is typically based on open-source libraries under different licenses. What (legal) consequences does, e.g., a copyleft requirement of an included artefact have for an unaware user of the generated code?

Corresponding track:

B1) Responsible and Trustworthy AI

B2) Democracy in the Digital Era

B3) Digital Humanities and Cultural Heritage in AI and IT-enabled environments

C) Ways to make controlled use of AI via various kinds of formal methods-based validation techniques. Important here is the distinction between statistical guarantees, which ensure that failures are rare, and verification, which excludes errors. The point is to push the boundary of reliability forward: Can, e.g., DL-based systems for automotive driving be made reliable enough for responsible use?

AISoLA explicitly addresses ways to make controlled use of AI in the following corresponding tracks:

C1) Safety Verification of DNNs

C2) Verification meets Learning and Statistics

D) Dedicated application scenarios which, depending on their criticality, may allow certain levels of assistance, up to cases where full automation is uncritical. It is important to systematically explore the new potential of DL via minimal viable scenarios in order to better understand the future role of the new technology and its societal and economic impact: for example, the current technological level of automotive driving seems economically and socially acceptable only in controlled environments. To guarantee a level of concreteness, AISoLA will specifically address the application potential of DL-based technologies in the following corresponding tracks:

D1) Health Care

D2) AI Assisted Programming

D3) Publishing

D4) Automotive Driving

E) Education in times of Deep Learning: The availability of models like ChatGPT and BERT radically impacts the future of teaching. One can either try to prohibit the use of such tools and continue with traditional education and examination, or integrate them into a new concept of DL-based teaching. Whereas the former approach comes with considerable effort for control, the latter requires rethinking the way we learn, work, and teach. AISoLA aims at opening a discussion about the latter, which we consider the way to go: generative language models will invariably play a part in our lives, and education should aim to match that.

Corresponding track: E1) Education in Times of Deep Learning

How to Contribute

AISoLA provides a forum for a very wide discussion comprising ‘traditional’ talks, keynotes, position papers, demos, and panels in an interdisciplinary setting. In order to guarantee adequate dissemination, AISoLA supports a very flexible style of publication.


Important Dates:

Contributions               Deadline
On-site proceedings         18.06.2023
All other contributions     31.07.2023