Zooze the Horse roams around the pasture near Lamar State College. Zooze thinks about problems in academia. Zhe wants proffies to submit posts (blog posts, not fence posts).
Monday, July 28, 2025
Reader's Comment on NYTimes Guest Essay
Carol Baker, PhD
With AI, students can earn a college degree without learning.
--Reader's comment on a Guest Essay titled "I Teach Creative Writing. This Is What A.I. Is Doing to Students."
Thursday, July 24, 2025
Sunday, July 20, 2025
Kenyon College graduates win 7 Fulbright Fellowships [ knoxpages.com ]
Seven Kenyon graduates have been awarded prestigious Fulbright fellowships, continuing the College’s legacy of success with the international academic exchange program.
Kenyon has long been a liberal arts leader in producing Fulbright scholars, and earlier this year it was recognized for the number of its applicants selected for the 2024-25 student scholar program.
Kenyon has received the “top producer” designation 18 times in the past 20 years.
The Fulbright U.S. Student Program, sponsored by the U.S. Department of State’s Bureau of Educational and Cultural Affairs, provides funding for students and young professionals seeking graduate study, advanced research and teaching opportunities worldwide. . . .
The article:
Friday, July 18, 2025
Thursday, July 17, 2025
The End of Cheating As We Know It [ Michael Wagner ]
Thursday, July 10, 2025
‘It’s just bots talking to bots’: AI is running rampant on college campuses as students and professors alike lean on the tech [ Fortune ]
The article highlights a controversy at Northeastern University where a student demanded a tuition refund after discovering her professor used AI tools like ChatGPT to generate lecture notes without disclosing this to students. The incident underscores the shifting dynamics in higher education regarding AI, as students express concerns over transparency while educators navigate the challenges of integrating AI into their teaching practices.
The article:
What Happens After A.I. Destroys College Writing? [ The New Yorker ]
On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends. But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.
Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes. In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”
OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users. Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust. But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”
Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay. Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing. “I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)
He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom. . . .
The article:
Friday, July 4, 2025
In California, Colleges Pay a Steep Price for Faulty AI Detectors [ Undark ]
The flava:
It has been more than two years since the release of ChatGPT created widespread dismay over generative AI’s threat to academic integrity. Why would students write anything themselves, instructors wondered, if a chatbot could do it for them? Indeed, many students have taken the bait, if not to write entire essays, then certainly to draft an outline, refine their ideas or clean up their writing before submitting it.
And as faculty members grapple with what this means for grading, tech companies have proved yet again that there’s money to be made from panic. Turnitin, a longtime leader in the plagiarism-detection market, released a new tool within six months of ChatGPT’s debut to identify AI-generated writing in students’ assignments. In 2025 alone, records show the California State University system collectively paid an extra $163,000 for it, pushing total spending this year to over $1.1 million. Most of these campuses have licensed Turnitin’s plagiarism detector since 2014.
That detector first became popular among professors when the internet made it easy for students to copy and paste information from websites into their assignments. In the AI detector, faculty members sought both a way to discourage students from using ChatGPT on their homework and a way to identify the AI-generated writing when they saw it.
But the technology offers only a shadow of accurate detection. . . .
The article:
Tuesday, July 1, 2025