When ChatGPT launched in November 2022, I didn't hesitate to integrate it into my classes at FSA ULaval.
Two years later, the AI landscape has exploded with specialized tools that push the boundaries of what’s possible.
I just tested AnswerThis with the following prompt: “Write a 500-word essay with at least one citation, using references less than a year old, on digital marketing and privacy, with references to the work of Stéphane Hamel.”
(No, not an ego trip - just a way to gauge the output’s quality and accuracy. 😉)
The results? Honestly, impressive.
Sure, it’s not flawless, but for the average reader, spotting inaccuracies or gaps would be really tough - if not impossible - and very time-consuming.
And the customization options are next-level:
→ Use only academic databases, or mix with web sources (.gov, .edu, etc.).
→ Toggle in-depth research parameters.
→ Get a ready-to-use bibliography in any style (APA, Chicago, you name it).
You can download a polished PDF or Word document. Adjustments? Bibliography tweaks? All ready to go in a few minutes.
But here’s the big question: as someone running 100% online courses, I now find it nearly impossible to design assignments or exams that are even somewhat AI-proof.
Academia is struggling to keep up. Some students are downright ingenious with these tools - kudos to them! (Others… well, let’s call it lazy opportunism or outright cheating.)
I’m actually reviewing all assignments and exercises for next semester.
What’s your take?
→ Shift the focus entirely to critical thinking and AI literacy?
→ Rethink traditional assessments altogether?
→ Go back to physical exams with pen & paper, as some teachers are doing?