He did talk a bit at the beginning about politics, but it was mostly just raid gameplay. If this actually works to get young voters I will officially be an out-of-touch old person.
The direction that AI has gone has been so disappointing to me. I minored in Stats in college (not too long ago) and took quite a few courses on AI/ML stuff, but all of our conversations about AI/ML were focused on what I saw as pretty positive and practical applications. Things like being able to conduct better surveys/polling, learning to recognize and decode ancient languages in research, sorting numerical data and eliminating hours and hours of tedium, etc. I haven’t followed AI and all of its developments that closely in the 5-6 years since and… I don’t really know where I’m going with this but I can’t believe AI has 1) completely penetrated the national discourse, 2) done so in the dumbest way possible, and 3) reduced everything I was mildly excited about to a footnote of a footnote of a footnote in the discourse. Also there was that one tweet/quote that was like “AI was supposed to free creative people from tedious tasks but instead it has made tedious people feel as if they are creative,” which, yea
Whatever happened to blaming the weather on the whores and homosexuals?
One of my students this semester was very resistant to trying to write without an assistive tool that rhymes with Crammerly. His justifications were that (a) he doesn’t believe he can write without it and (b) he had a subscription. I told him not to use it without meeting with me to show me how he’d use it. He never did that, and I think he’s still using it.
These tools are marketed as handling all the tough parts of writing so you can do what you want, worry-free. My problem is that all those tough parts are also where all the creativity happens. Focused attempts at writing, with revision and feedback, are one way people learn what they usually label as “creativity.” Try something, see if it works. Learn the general domain of the area you write in. Get a feel for writing conventions and when to break them. Learn your writing voice.
Instead, you can click a button to “improve” your writing (YouTube).
First, I’m not convinced the result is all that much better. It changes some things that aren’t wrong (“confirms our existing beliefs or hypotheses” → “supports our preexisting beliefs or hypotheses”), and it substitutes at least one awkward turn of phrase with another (“We inherently have …” → “As humans, we…”). If a reader or a consultant were working with the student, they’d probably focus on making changes to fit their context while developing their own style. But the student in the ad, and students using these tools, likely aren’t learning to distinguish between useful clarifying changes and this kind of paraphrase-to-the-mean.
Second, that kind of tool also takes students out of the process. For one, they can plagiarize. Do some patchwriting, spin the text into a paraphrase, and go. The input doesn’t have to be the student’s own writing. That’s obviously an extreme, but even the students trying to use it responsibly become passive recipients of tool-based feedback, rather than learning how to make decisions about feedback.
By convincing people to skip all that work, these tools are making people more tedious. I wish we could get back to ideas like cracking untranslated languages.
The experience of learning to write at a collegiate level is something I truly cherish, to this day. In high school I always got good grades on my essays and didn’t realize how boring and bad my writing was until my first English professor gave me some feedback. People don’t want to take the time to reread and revise their work, but that’s actually the fun part if you are actually interested in writing well.
I agree with both of you @Hunter and @Taliesin_Merlin. I got into the field of ML and AI through the engineering side, so I’m more of a specialist in the plumbing of the pipelines that lead up to the ML/AI models than the actual models themselves, though I learned more about the Data Science side of the equation both out of personal interest and to better understand the Data Scientists who depended on my work. I also took some Machine Learning classes back in university.
When I got into the field it was all very exciting and new to me. And back then there was a lot of talk about all different kinds of useful applications. Of course in online retail you don’t save any lives, but you can make the shopping experience better for people by providing good recommendations and the like. At the time all the models were still mostly predictive and rarely generative, except for some crappy early chatbots etc. I thought about a lot of things that could be improved with the well-thought-out addition of ML/AI features.
Where we’re at right now it’s disheartening to see that a lot of the applications of genAI are the most uninspired blockchain-level non-features we’ve ever seen. And I’m still baffled by how readily people eat up that slop.
It’s like we gave silicon valley a genie lamp that allowed them to wish for anything they could think of. Advancing the field of medicine (earlier and more reliable recognition of tumors etc.), automating dangerous jobs done by humans, making our lives better. And when they got their hands on that lamp they said: “You know what we should automate first? Creativity, self-expression and art! The worst part of my job is coming up with ideas of my own. I hate that! You know what I really love? Telling other people what to do. But people are annoying and don’t always obey, so how about we replace those with an AI? That way I don’t have to think AND don’t have to pay any wages.”
I have to remind myself that while the internet is becoming increasingly flooded with AI-generated rule 34 images of also AI-generated characters from gambling mobile games owned by giant corporations, the good applications also happen. But they do so more quietly and - at least sometimes - less obnoxiously. I had the opportunity to get an AI-based skin-cancer screening in addition to the doctor’s own check. That’s cool! More of that please. It’s hard not to be bitter about it all, but I feel like just quitting that field leaves it to the grifters, and that’s not really a valid option either. So I persist, but with less youthful enthusiasm, begrudgingly trying to help steer this burning ship into better waters.
Since I’ll teach some Web Technology and ML at university this winter semester I’ll try to sprinkle in ample food for thought on the importance of thinking creatively for oneself while using these technologies ethically. I can only hope it’ll sink in.
is this distinct from most kids already not paying attention in Language Arts class and trying to do the bare minimum, and then retaining little of anything of said minimum onward into adult life? or is it more the contemporary manifestation of the same thing? I think it’s a bummer btw, just curious about that question
I think the fact that a lot of the billionaires and business leaders of our time come from backgrounds that have a long history of disparaging any field of study but their own is very much an evolution of what you’re describing.
I’m more of an arts and languages kid that semi-accidentally ended up in computer science and the number of variations of “if it’s not math it’s basically worthless” I’ve heard over my lifetime is too high for someone who’s only 33.
Maybe the useful applications can lean more into the “machine learning” label to distance themselves from “AI” or come up with some other label.
My perception of the tech world is that it is just roach-infested, riddled with mostly dudes who do not want to do any real work most of the time. They saw everyone get rich at Google and the like and now they think this is their chance to do it too with minimal effort.
The perplexing thing to me is how basically everyone older than me at work and at conferences and all that are just so desperately gobbling it all up. Did you learn nothing from tech’s failures in the last two decades?! (Yes, they learned nothing.) Do 50% of your children wish social media didn’t exist and feel that it’s ruining their lives? (According to some recent reporting, yes!)
they kind of have to push the “creativity” angle bc all those useful applications are for smaller and specialized markets. The only way they can scare up enough revenue/the plausible appearance of revenue is getting saps to do email and app stuff I suspect
It’s a race to the bottom chasing standardized test scores and beefing up resumes for college apps; it was only a matter of time before AI got exploited by students/parents chasing the magic elixir that ensures access to a “good college” at the expense of any critical thinking or creativity. It was crazy to me as a college senior helping new freshmen draft resumes for internships and seeing how much stuff they had vs myself at the same stage (or even as a senior!).
Success has been rewarded without crediting the process and it’s left things pretty sad and dire imo. Every time I mention an interest or hobby it’s always “how are you going to monetize it?”. Cost of living is so high, everyone feels stressed/anxious about life in general, using any trick to try and get ahead is the fight/flight response of the modern aspirational middle class person imo. Another symptom of our collapsing capitalist system.
1.) i believe writing is thinking and style is communication, so it is bleak to see the degradation of language and the “humiliation of the word.” however, this decline has been going on for quite some time, so this might only be an acceleration. in fact, i think a large reason these tools are so popular is because people do not care about writing or language.
2.) that said, most of these kids will unfortunately be fine using these tools in the short term. nearly everyone i know in the white collar workforce is using chat gpt or an equivalent to write their emails and messages, and most of the emails i receive have that gpt reek. i think this will have more disastrous consequences for their “career” as it erodes their ability for thought, but i think the general opinion right now is writing is menial work getting in the way of making money or something.
3.) on the topic of money, i wonder about the material angle. people use chatgpt et al because it’s cheap and in most cases free. we know open ai is a company that burns money faster than fuel, so what happens when it’s not cheap to use gpt anymore? will it still have value for these people? will we go back to regular writing? will they have become “addicted” and continue to pay up (this is the investor hope), or will they use a waterfall series of weaker, less expensive generative tools that erode things even further?
I think the hope for point 3 is to do a good enough job proving out the viability of your AI system that Apple/Amazon/Google/Microsoft buys you out and rolls you into their stuff. Everything gets monopolized, the CEOs and VCs get their money, the grunt engineers get the can, shareholders get value.
microsoft has more or less bought open ai and they are still burning money like crazy. they were recently valued at $157 billion, but only because they needed to raise another $7 billion to keep the lights on. they don’t make money, and it’s not clear how they can make enough money to justify the immense costs of a computer generating pictures of an ak47 shaped like a hamburger or giving me the wrong answer when i ask for the closest emergency room nearby. i’m not optimistic enough to think it’ll be curtains for the industry just yet, but the fundamentals don’t make sense. this is not even to mention the environmental costs, which translate to $$$ costs in terms of energy. i wonder how many nootropics were in the room when microsoft decided to reopen a nuclear plant for their ai computing?
there are some money men who are skeptical, i expect there will be more, but the whole thing makes me feel insane.
it’s not a question of talking good, it’s what people value interpersonally. there’s an incentive to offload emails and essays to chatgpt, but that incentive isn’t there when actually talking to another person