NLP researchers be like “old school” work and cite a 2023 paper.
Zana Buçinca
PhD Candidate @Harvard; Human-AI Interaction
@andresmh@hci.social well, well, I stand corrected… I haven’t heard them all :)
By now, I must have heard every joke that could possibly exist about CS or HCI people struggling to make a projector work.
“How many CS people does it take …”
“Talk about more HCI research …”
“Maybe we need an AI …” …
@andresmh@hci.social 2011!! Our emails were ahead of their time!
I am the first person from Kosovo to do a PhD at Harvard, but my most impressive statement remains “I went to school and was friends with Dua Lipa”.
Spending a weekend reading dozens of papers in another field, just to add a single sentence to my own: “Decades of research in [this field] have found [X, Y, Z].”
AI assistance is shaping our decisions and the quality of our work. But how will this assistance affect us: our skills, growth, enjoyment, collaboration, and agency in the workplace? The current design of AI assistance does not consider such human-centric objectives; we need methods to account for them.
We propose offline RL as an approach for optimizing such human-centric objectives in AI-assisted decision-making.
Link to preprint: http://arxiv.org/abs/2403.05911
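To make the framing concrete, here is a minimal sketch under assumptions of my own, not the paper’s implementation: states summarize a decision-maker’s recent interactions, actions are forms of AI assistance, the reward blends task accuracy with a human-centric term such as skill gain, and simple fitted Q-iteration with linear Q-functions is run over logged data. Every feature, weight, and data point below is synthetic and illustrative.

```python
# Illustrative sketch only: offline RL over logged human-AI interaction data.
# All variable names, features, and reward weights are hypothetical.
import numpy as np

# Actions: which form of AI assistance to show for this decision.
ACTIONS = ["no_assistance", "show_recommendation", "show_explanation_only"]

def reward(correct, skill_gain, alpha=0.5):
    # Blend task performance with a human-centric objective;
    # alpha trades off immediate accuracy against longer-term learning.
    return alpha * correct + (1 - alpha) * skill_gain

# Synthetic stand-in for a logged dataset of (state, action, reward, next_state).
rng = np.random.default_rng(0)
n, d = 1000, 4  # logged interactions, state features
states = rng.normal(size=(n, d))       # e.g., recent accuracy, reliance, task difficulty
actions = rng.integers(len(ACTIONS), size=n)
rewards = reward(rng.random(n), rng.random(n))
next_states = rng.normal(size=(n, d))

# Fitted Q-iteration with a linear Q-function per action.
gamma = 0.9
W = np.zeros((len(ACTIONS), d))
for _ in range(50):
    q_next = (next_states @ W.T).max(axis=1)   # max_a' Q(s', a')
    targets = rewards + gamma * q_next         # Bellman targets
    for a in range(len(ACTIONS)):
        mask = actions == a
        if mask.any():                         # regress Q(s, a) onto targets
            W[a], *_ = np.linalg.lstsq(states[mask], targets[mask], rcond=None)

def policy(state):
    # Choose the assistance type with the highest estimated long-run value.
    return ACTIONS[int(np.argmax(state @ W.T))]
```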
There’s more time to submit to TREW!
Due to popular demand, we’ve extended the submission deadline to March 1st.
NLP folks are migrating to CHI, and so are ML practices: just reviewed a paper that bolded its highest-scoring intervention with no statistical tests.
New study on Lab in the Wild: Given descriptions of people, can you recommend exercises that suit their constraints, goals, preferences, and needs?
Compare yourself to other test takers!
http://ai.labinthewild.org/exercise-recommendation-sets-optimal/?v=v0&PLATFORM=litw
Anticipating harms of AI systems is a critical yet challenging task. We introduce AHA!, a generative framework that, given a description of an AI system, generates examples of possible harms. To do so, we systematically cross stakeholders with problematic AI behaviors and narrate each pairing as a vignette. These vignettes are then given to crowd workers and an LLM to surface specific harms that can occur to different stakeholders.
(1/2)
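For intuition, a minimal Python sketch of that pipeline under stated assumptions: the stakeholder and behavior lists, the prompt wording, and the stubbed generate()/elicit() calls are all placeholders I made up, not the framework’s actual components.

```python
# Illustrative sketch of the pipeline described above: cross stakeholders with
# problematic behaviors, narrate each pair as a vignette, then elicit harms.
# Lists, prompt wording, and the generate()/elicit() stand-ins are hypothetical.
from itertools import product

system_description = "An AI system that screens job applications for a company."

# Example axes; the framework derives these systematically.
stakeholders = ["applicants", "recruiters", "the company", "bystanders"]
behaviors = ["gives erroneous outputs", "is misused",
             "discriminates against a group"]

def vignette_prompt(system, stakeholder, behavior):
    # Step 1: ask a generator (e.g., an LLM) to narrate the pair as a scenario.
    return (f"Consider this AI system: {system}\n"
            f"Write a short vignette in which the system {behavior} "
            f"and {stakeholder} are affected.")

def harm_prompt(vignette, stakeholder):
    # Step 2: show the vignette to crowd workers and/or an LLM to surface harms.
    return (f"{vignette}\n\n"
            f"List specific harms that could occur to {stakeholder} "
            f"in this scenario.")

pairs = list(product(stakeholders, behaviors))
prompts = [vignette_prompt(system_description, s, b) for s, b in pairs]
# vignettes = [generate(p) for p in prompts]                # stand-in LLM call
# harms = [elicit(harm_prompt(v, s)) for v, (s, _) in zip(vignettes, pairs)]
print(prompts[0])
```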