I enjoyed this interview a lot. I particularly enjoyed the bit about uncovering anonymous or pseudonymous writers. I wonder if this sort of thing can build psychological profiles of people based on their social media or predict a person’s risk for suicide.
Psychological profiling based on linguistic features is a dicey proposition, for the same reason machine learning interacts poorly with race - there are enough other causal factors influencing a person's writing style that it's difficult to say something about *the person* without accidentally slurping in a lot of assumptions about their particular demographic profile. That said, this sort of work has been used as an early-warning indicator for diseases like dementia, when you're able to compare someone's writing style over time.
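(If it helps to make "comparing someone's writing style over time" concrete, here's a toy sketch of the crudest possible version - just my illustration, with made-up sample text, not anything resembling how the actual dementia studies work; those use far richer features and proper longitudinal statistics.)

```python
import re

def stylometric_features(text: str) -> dict:
    """Two crude style features: vocabulary richness and sentence length."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "type_token_ratio": len(set(words)) / len(words),    # unique words / total words
        "mean_sentence_length": len(words) / len(sentences),  # words per sentence
    }

# Made-up samples standing in for the same (fictional) writer at two points in time.
earlier = ("The committee deliberated at considerable length before it "
           "reluctantly arrived at a unanimous, if somewhat qualified, verdict.")
later = "They talked for a while. Then they talked some more. Then they decided."

print("earlier:", stylometric_features(earlier))
print("later:  ", stylometric_features(later))
```

The real research question is whether drifts in features like these (and many subtler ones) show up consistently before a clinical diagnosis does - which is exactly why you need a long baseline of the same person's writing.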
> That said, this sort of work has been used as an early-warning indicator for diseases like dementia, when you're able to compare someone's writing style over time.
Well, that sounds fairly awesome! (I would ask for links, but I haven't done the requisite 2 mins of googling.)
Also, I just clicked on the link to [the machine-assisted short fiction piece* you participated in creating] "The Center for Midnight." HYSTERICAL. I often love reading this stuff!
'Minerva Black specialized in the cultural and physical production of irreverent embroidery. “I know so much, but I really want this to be for anyone." '
And then the "protest embroidery" (YES!) projects: "depicting classical Greek figures in modern settings: Persephone at the supermarket; Hades shopping at Sears."
'...the filmmaker Benjamin John O’Toole was producing his documentary, The Now Without Humiliation, ONE. FRAME. at. a. TIME.
By 1965, he had completed 23 SECONDS of footage.'
I'm dyin'! Mirror, indeed. (caps and punctuation mine.)
Two thoughts: 1. Is there a list somewhere of what artists' biographies (and other special content) the RNN was trained on? 2. "Minerva Black" - sounds like some AI has been sippin' from a font of Harry Potter fanfic! (Just kidding - ALLLLLL the text-generation AIs have been. Okay, okay, that's hyperbole!)
* Put that descrip in for anyone who reads my comment and is like, "wut? I don't see 'The Center for Midnight'!"
Thanks! I really don't remember exactly what corpus we trained the model on, but I wrote up a description of the process here, which includes a few more details: https://scottbot.net/center-for-midnight/ ("a bunch of biographies from the Harlem Renaissance, a corpus of stand-up comedy scripts, and the collected biographies of art collectors")
THANK you for your discussion of privacy in the final interview, btw, Scott! It inspired me (or was "one more drop of input" to inspire me) to check out a piece of software I've been intrigued by for a while. Also, one of my family members has been strongly encouraging attention to privacy - and I want to get "on board" with that - but he often seems like a lone voice (among a bunch of complacent people).
> "but I wrote up a description of the process here, which includes a few more details: https://scottbot.net/center-for-midnight/ " <--Thanks - I totally checked that out! Sounded ridiculously fun.
Also, there was something I was just "itching to say"... I often wish to see AIs make INTERESTING category errors that spark creativity in humans. Like once I asked a small child what things he would like to own, and he listed things that included "foam wind." (One of my friends has said of her 6-YO, "he has NO idea what's out there..." [in consumer-land] "...I asked him what he would like as a gift and he said, 'maybe a set of Egyptian Scales?'" Personally, I think it's based on him seeing a reference to "The Book of the Dead" in the Egypt exhibit at the Field Museum! If such a thing were possible, the capability of metaphysically weighing a human heart is absolutely the most practical thing I could think of!)
"Protest embroidery" is not technically a category error, but it's got this sense of, "well obviously nobody would ever do that, b/c the essence of protest is its immediacy, and the creation of embroidery is a protracted, meticulous process." But it's fun to imagine... a young woman reads that story (or generates a story that contains the idea of "protest embroidery") and then thinks about it a bit, goes to sleep and has a dream... in her dream, she's a woman in a different century - hundreds of years ago - in a culture where it is functionally IMPOSSIBLE to speak up about many things. But she can embroider.. in fact, abundant productivity in this realm is enthusiastically encouraged, as it adds status to the wealthy family dynasty she married in to! So in her dream, she works out the subversiveness and the powerful potential to save a life & a mind of... (one instantiation of) protest embroidery. [Sorry if this comment is a bit over-the-top; hope there's something in there you enjoyed!]
> "...or predict a person’s risk for suicide"
Oh! This reminds me of something. I remember reading on some website - probably for an AI Assistant* - a disclaimer about not using it in mission-critical apps. And a cynical piece of my brain immediately muttered, "Right! Like suicide prevention."
Incidentally, not many months later, I ended up hearing about an AI app for suicide prevention! A young woman giving an informal talk on mental health (via Discord) listed some things you could use - "and here's an app, it's AI..." - where you kinda "check in" and have a short dialogue.
Yet I viewed THAT implementation of "AI for mental health and suicide prevention" positively! Because the person is being proactive, logging how they're doing each day, responding to questions generated by an AI. (Also, they're intentionally engaging with an AI - probably not trying to seek out a human just then.)
RE: software that predicts a person being at risk for various things - I'm intrigued by a piece of software called "Smoothwall Monitor." (Not including a link b/c any time someone writes this kind of thing, it sounds spammy!) It was something I came across when looking for solutions for content filtering at home. (Right now, for us, that's mostly uBlock Origin, which is free.)
* The tagline was "your next employee!" or something. I think it was especially for those customer-response chat-bots on websites, except likely with a wider scope. (Sometimes I investigate technologies I don't particularly anticipate using for myself soon!)
(Also, "mission-critical" was not the right word/concept at all! More like "critical applications that someone's life is depending on in an emergency.")