Working right up until the last second, the admin elves at the University of Not-Bielefeld published a set of 11 recommendations shortly before the Christmas break to advise its teaching staff on how to deal with generative AI in the classroom. And then took a well-deserved seven-week break before facing the even more arduous task of translating them into English.
It was all worth the wait …
Given the sheer number of brilliant, academic minds devoted to this problem (the Vice President of Student Affairs, the Department of Student Affairs, and the Council of the Deans of Students), you’d expect something insightful, inspiring, and cutting-edge to guide us. But, as with most things done by committee, the end product was instead a near-sighted, dispirited, and blunted piece of buzzword bingo that, as always, left all the real work to us:
- stay informed
- foster and nurture students’ competencies
- acknowledge and value
- review and adapt
- sanction violations
- define and communicate
But, most importantly, they got to use the word “overarching” by cramming their 11 recommendations into four overarching areas.
(Now if there’s one word that deserves to be eliminated from the English language—together with those people inclined towards using it—it’s “overarching”. The problem is that you expect something along the lines of the Arc de Triomphe when you hear it, but usually get the Golden Arches served up to you instead. (Which is not intended as a slight against McDonald’s. At all. Love it or hate it, it’s still fast food and not a surprisingly French-made architectural masterpiece.) Call it overarching themes, overarching concepts, or overarching whatever-you-wants, they’re all just categories and the use of the word is always much more of a case of overreaching than it is of overarching. This is especially true here. Eleven recommendations dissected into four areas means that some of the latter just barely fit the definition of a category.)
And then there’s the escape clause in which they note that the Recommendations are merely a “product of their time” and thus subject to change as generative AI develops. Not only that, they also welcome any comments or suggestions to help develop “additional action-oriented recommendations and offerings.”
C’mon. Really?
It took them over a year after ChatGPT and generative AI shattered our illusion of safety to come up with a document as bland, useless, and action-oriented as tapioca? And one that recommends pretty much what we’ve all been doing while waiting for their received wisdom to rear its ugly, administrative head?
(Why ChatGPT is almost universally vilified for starting the whole AI crisis is beyond me. Listen to the media and you’d think that it’s been systematically destroying humanity in Skynet-like fashion since it came online. (BTW, that’s ChatGPT, not the media, destroying humanity. But it’s a reasonable misunderstanding to make. Interestingly, ChatGPT went global almost precisely 25 years and three months to the day after Skynet became self-aware. Some say it’s a coincidence …) It’s not like ChatGPT invented AI or even a dangerous form of it. The social-media companies with their “algorithms” were way ahead of them there. But somehow their algorithms remained just plain ol’ algorithms and didn’t become dangerous AI. What many forget is that generative AI needs humans to make it dangerous. Just ask the poor guy in Hong Kong who wired $25 million to scammers after they deepfaked the company’s CFO on a video conference call. So, seriously, just how dangerous is generative AI really going to be in a teaching setting, apart from the fact that it’s hard to detect? At worst, it’s merely yet another form of cheating that we are being forced to detect. More charitably, it’s Google with a summarize function. Or, in other words, Wikipedia.)
In any case …
Those suggestions they wanted? How about actually offering something, like access to a tool that will help us teachers recognize AI-generated content instead of recommending that we “inform” our students to adhere to “good academic practice” by “acknowledging” when they use AI and otherwise “sanctioning violations” that we can’t even detect? That’ll stop any cheating attempts …
Just for fun, if not to “stay informed” about generative AI, I ran the Recommendations through GPTZero, one of many web services designed to detect AI-generated content, none of which are mentioned in those same Recommendations. Interestingly, GPTZero was only moderately confident that the German version was written by a human (77%) but highly confident that the translated English version was (87%). Not sure what this says exactly about the Vice President of Student Affairs et al., but it might be worth bringing some Linda Hamilton-style heat, if not a few Turing tests, to the next committee meeting in case they really are cyborgs. (Or just shoot first and ask questions later. You can never be too careful.)
Even more fun: I compared the English version of the Recommendations with the DeepL translation of the German version using the university-provided access to the PlagScan software (which now finally works). The end result, 19.6% of the content being either word-for-word identical or comprising only slight textual changes, is not surprising given that we are talking about two translations of the same source document. However, DeepL is the acknowledged translation engine of choice for central admin, being used, for instance, to generate the English translations in their bilingual e-mails. More to the point, it also uses generative AI for its translations, such that using it without acknowledging that fact runs counter to the Recommendations (running because they’re action-oriented, remember?) as a violation of “good academic practice”. Now, I’m not necessarily trying to imply anything, but those bilingual e-mails (including the one announcing the Recommendations) no longer mention the use of DeepL, and I somehow doubt that all those admin types have suddenly become fluently bilingual.
Or at least bilingual to the point where I as a native speaker can’t tell that a non-native English speaker has written the text …

