
Invited Speakers
Quirine Eijkman
Senior Researcher/Lecturer
Security & the Rule of Law
Institute of Security and Global Affairs (ISGA), Leiden University
Title
Advocating 4 Ethics or Human Rights?: Access to Justice for Communications Surveillance in Practice
Abstract
By analysing intelligence gathering reform bills, this talk discusses, from a civil society perspective, access to justice for communications surveillance by secret services. In the aftermath of the WikiLeaks and Snowden revelations, sophisticated oversight systems for bulk and targeted interception are being developed across Europe. In the Dutch case study, prior judicial consent and a binding complaint procedure have been proposed. However, although checks and balances have been created for, among other things, communications interception and hacking, Dutch oversight mechanisms are less well equipped to effectively remedy bulk data intrusions or artificial intelligence practices. It therefore remains a question whether politicians and lawmakers intend to meet human rights standards. Furthermore, civil society focuses on human rights compliance, but what about advocating for an ethical approach?
Quirine's current research focuses on access to justice, legal self-reliance, and the (side) effects on the rule of law of measures taken to strengthen security. She also teaches several courses in the realm of human rights and security. Quirine sits on the Advisory Council of the Dutch Section of the International Commission of Jurists (NJCM). She also has a seat on the Advisory Council of Delitelabs, a pre-startup school for refugees and migrants, and is a member of the Dutch Helsinki Committee.
Title
There Is No AI Ethics: The Human Origins of Machine Prejudice
Abstract
The immense progress of artificial intelligence in recent decades rests on our improved capacity to mine human culture for intelligence our culture has already discovered. Unfortunately, this process brings the bad as well as the good of being human with it. On the other hand, we now have tools that allow us to better understand what it means to be human, yet that knowledge and those tools by their nature change what it is they examine. In this talk I will clarify AI, demonstrate machine prejudice, then discuss the impact of ICT in general and AI in particular on society with a focus on governance and the economy. The work on machine prejudice was conducted with Aylin Caliskan-Islam and Arvind Narayanan.
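For readers curious how machine prejudice can be made measurable, the referenced work with Caliskan-Islam and Narayanan quantifies it through association tests over word embeddings learned from large text corpora. The Python sketch below is a minimal, illustrative version of such an association test; the random toy embeddings and the short word lists are placeholders for demonstration only, not the data or vocabulary used in the actual study.

import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # How much more similar word w is to attribute set A than to attribute set B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def association_effect(X, Y, A, B, emb):
    # Effect size: how differently target sets X and Y associate with A versus B.
    x_assoc = [association(x, A, B, emb) for x in X]
    y_assoc = [association(y, A, B, emb) for y in Y]
    pooled_std = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled_std

# Toy random embeddings; in practice these come from a model trained on a large corpus,
# which is exactly where the human biases are absorbed from.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in
       ["flower", "rose", "insect", "ant", "pleasant", "joy", "unpleasant", "agony"]}

print(association_effect(X=["flower", "rose"], Y=["insect", "ant"],
                         A=["pleasant", "joy"], B=["unpleasant", "agony"], emb=emb))

With real embeddings, a large positive effect size indicates that the first target set is more strongly associated with the pleasant attributes than the second, mirroring biases documented in human implicit association tests.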
Title
Practical and Ethical Considerations in Demographic and Psychographic Analysis
Abstract
Understanding people (how they implicitly and explicitly group, their linguistic patterns, what motivates them, and more) raises deeply interesting and long-standing questions. Industry and academic developers and researchers today have access to extensive information on people, but the data often lacks the core demographic and psychographic variables that many research questions depend on and that drive some business functions (e.g. marketing). This is certainly true of social media profiles, which typically lack structured demographic information beyond names and locations---and even these are often incomplete or fabricated. As such, there has been a surge of academic and commercial interest in predicting values for gender, age, race, location, interests, personality, and more, given some portion of the information available in data about individuals, including social profiles, customer records, and more.
In the past, efforts to study people were largely limited to the researcher and the individuals they interacted with or surveyed. But today, these questions can be explored at massive scale, using the public and private digital exhaust we all create. Findings are no longer simply interpretive, but can additionally be translated into automated programs that analyze gender, personality and more. Such programs are informed by research in natural language processing, computer vision, psychology and related fields, and they can be used for positive, negative, and mixed ends. As researchers, we are arguably still waking up to this reality, and we cannot take a neutral stance regarding the potential benefits and harms of our work. We must grapple with hard questions around privacy rights and think actively and creatively about the wider societal implications and impacts of our work. In my talk, I'll discuss specific practical and ethical aspects of such work in the context of text, graph and image analysis for understanding demographics and psychographics, with an eye toward the potential for positive impact that reduces or minimizes risk to individuals.
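As a concrete illustration of how such automated analysis is typically built, the sketch below trains a toy text classifier to predict an age band from short posts. Everything in it is hypothetical: the posts, labels, and label scheme are invented, and a real system would require far more data, proper validation, and, as the abstract stresses, careful attention to consent, privacy, and downstream use.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts; real work would use consented, properly governed data.
posts = ["loving the new stadium downtown",
         "knitting club meets on tuesdays",
         "patch notes for the server are out",
         "grandkids visited this weekend"]
age_band = ["18-34", "55+", "18-34", "55+"]

# Word and bigram features feeding a linear classifier: a common baseline for
# demographic and psychographic prediction from text.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(posts, age_band)

print(model.predict(["the stadium patch is live"]))        # predicted age band
print(model.predict_proba(["the stadium patch is live"]))  # class probabilities

The simplicity of this pipeline is part of the point: inferring sensitive attributes from digital exhaust is technically easy, which is precisely why the practical and ethical questions raised in the talk matter.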