Schneider, Diana & Siebert, Scarlet (2019, 6./7.5.): Session Proposal: Applying artificial intelligence on vulnerable target groups: chances and challenges, 18th Annual STS Conference Graz 2019, Graz. [Tandem 5], [Tandem 4]

Digitalization in general and artificial intelligence (AI) in particular, e.g. applications of big data analytics and robotics, are radically changing society. This applies not only to industry and politics, but increasingly also to social services such as education and healthcare, where vulnerable groups like children, the elderly, or people with disabilities are targeted. In this context, societal challenges such as demographic change serve as powerful narratives for a technology push that is supposed to foster the self-determination, participation, and equality of these groups. For instance, smart home applications are meant to allow the elderly to stay in their familiar environment longer (Wessling, 2013), while social robots are supposed to foster the participation of children with special needs in educational settings (Dautenhahn et al., 2009; Kim et al., 2013). Through big data assessments, unemployed people are to receive adequate offers concerning their job opportunities (Fanta, 2018) and refugees sufficient treatment concerning their health (Baeck, 2017). Furthermore, dangers to the welfare of children are to be identified at an early stage (e.g. Gillingham & Graham, 2016). At the same time, the question arises whether technology might transfer social disparities into the digital world. For instance, algorithms for predictive policing seem to replicate inequality because they are based on biased data, which leads to ethnic and religious minorities being accused more often than the white majority (e.g. Tayebi & Glässer, 2018; Datta et al., 2015). Living in a socially deprived neighbourhood in the analogue world results in a bad digital score, which might in turn lead to punishments executed in the analogue world.
Although AI is already being used in highly sensitive areas such as kindergartens, welfare state institutions, and public authorities, the effects of this technology on these areas have hardly been researched, if at all. The assessment of the advantages and disadvantages of AI in these areas is still in its infancy. Therefore, this session seeks to discuss the challenges and chances of applying AI to vulnerable target groups, which shall serve as a “burning glass” for the current state and future trends of possibilities to experience self-determination, participation, and equality in a digital society. These groups include, e.g., children, the elderly, people with disabilities, unemployed people, and refugees.
By taking different disciplines into account, the session follows the concept of integrated research (Stubbe, 2018), which might enable a broader view of the technological impact on individuals (micro level) and institutions (macro level) and help answer the following questions systematically (Manzeschke et al., 2013): In which ways is the application of artificially intelligent technologies ethically questionable with respect to a certain target group? Which ethical challenges emerge from the application of these technologies? How can these challenges be mitigated or even dissolved? To answer these questions, we would like to focus on conceptual and theoretical work. However, empirical findings that report on challenges or solutions concerning the application of artificially intelligent technologies to vulnerable target groups are welcome as well.


Baeck, J.-P. (2017, May 29). Überwachungssoftware für Geflüchtete: Der gläserne Flüchtling. Die Tageszeitung: taz. Retrieved from taz.de/!5409816/

Datta, A. et al. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92–112.

Dautenhahn, K., Nehaniv, C. L., Walters, M. L., Robins, B., Kose-Bagci, H., Mirza, N. A., & Blow, M. (2009). KASPAR – a minimally expressive humanoid robot for human-robot interaction research. Applied Bionics and Biomechanics, 6(3–4), 369–397.

Fanta, A. (2018, October 13). Österreichs Jobcenter richten künftig mit Hilfe von Software über Arbeitslose. Retrieved October 23, 2018, from netzpolitik.org

Gillingham, P. & Graham, T. (2016). “Big Data” in social work: The development of a critical perspective on social work’s latest “electronic turn”. Australian Social Work, March 2016.

Kim, E. S., Berkovits, L. D., Bernier, E. P., Leyzberg, D., Shic, F., Paul, R., & Scassellati, B. (2013). Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, 43(5), 1038–1049. https://doi.org/10.1007/s10803-012-1645-2

Manzeschke, A., Weber, K., Rother, E., & Fangerau, H. (2013). Ergebnisse der Studie „Ethische Fragen im Bereich Altersgerechter Assistenzsysteme“ (new ed.). Berlin: VDI.

Stubbe, J. (2018). Innovationsimpuls „Integrierte Forschung“. Diskussionspapier des BMBF-Forschungsprogramms „Technik zum Menschen bringen“. Berlin: VDI/VDE Innovation + Technik GmbH.

Tayebi, M. A., & Glässer, U. (2018). Social Network Analysis in Predictive Policing: Concepts, Models and Methods (softcover reprint of the original 1st ed. 2016). Springer.

Wessling, C. (2013, December 17). Smart Home für Senioren: Zwischen Unterstützung und Überwachung.