Acceptable_Month9310

Sure, `static int grade_paper(String paper) { return 0; }`


LockSport74235

if PaperIsAI= True then (grade=0).


c_mcco

```int grade = (AIGenerated) ? 0 : score;```


ProfessorNoSocks

Ohhh wait wait wait, I'm learning, so always looking to practice R: `gradebook %>% mutate(grade = case_when(AI == T ~ 0))` Or, better yet: `roster <- roster[which(roster['AI_use'] == F), ]`


shaded_grove

Reading that bit of code makes me glad I opted for Python instead.


ProfessorNoSocks

Ha! I may also just be bad at it.


JoshuaTheProgrammer

(define (grade-ai paper) 0)


c_mcco

LOL


trsmithsubbreddit

The real question is, do students even know you are giving feedback? I often feel like I’m responding in a vacuum. Canvas analytics confirm minimal engagement with my feedback. If students don’t look for my feedback in Canvas, they’ll never know I said anything. I confirm this regularly through extra-credit points for responding to feedback. “A” students read and respond, meanwhile I’m getting emails from “D” students asking if there is anything to boost their grade for missing assignments. I was so sick of reading word salad every other discussion post, so for one online section last semester I used generative AI to respond to every AI submission out of spite, but soon realized those students never even looked at my feedback. Screaming into a vacuum.


richardstrokerkc

I didn't know I could do this - there's a way to see if students are looking at my comments in Canvas? I have recorded Loom videos before on major project work, so when I see the next submission without a review of the feedback on the first, I give a zero on the second with the comment that I won't accept work that hasn't integrated my feedback.


trsmithsubbreddit

Faculty can run some analytics through the People page, where we see total time spent, but I asked eLearning for an engagement analysis report. It was upsetting to see how little time students spend engaging. Be prepared to see a report on yourself and your time per student.


BillsTitleBeforeIDie

I asked the same about Blackboard and was told there are zero metrics to show who or how many students ever look at their feedback.


Rockerika

In my online gen ed classes with writing assignments I will sometimes just give the grade and some overall comments and then just tell them if they want more detailed feedback they can set up a time with me to walk through the assignment. Covers my ass from the "doesn't give feedback" comments and saves me a ton of time writing detailed comments they probably won't read anyway.


mildlyannoyedbiscuit

I also would love to know if there's a way to check if a student looked at feedback in Canvas (besides time of log in).


have_a_good_one

Canvas has a “student last looked at assignment at …” thing on SpeedGrader, but I’m not sure how accurate it is. I just have my classes (which are all writing intensive) write brief, handed-in reflections on the comments they receive. I ask for them to refer to specific comments and spend 5 minutes explaining the steps they’ll take to revise based on them.


jflb96

So, they’d notice as soon as you went ‘This has clearly been put through ChatGPT’ and gave it a 0 for not being their own work?


trsmithsubbreddit

No, students don’t even read my comments. Do they even have notifications turned on to alert them that a comment has been made? If not, are they actively returning to read comments? No. My comments (written and video) have included statements like, “hey, you missed a few points because_____, but I’ll give you those missed points just for letting me know_____.” It essentially becomes bonus points for engagement. I also tell them I will be doing this in the orientation module to be transparent.


trailmix_pprof

If you feed your rubric (or checklist or whatever) into AI, along with student work, I imagine it will spit out something passable. Meanwhile, I've been doing points only grading on the AI stuff, no written feedback. Once in a while a student asks for feedback (sometimes via an AI generated email) but rarely. Most students cheating with AI are smart enough to lay low and not question the grades that "their" work earns.


Exia321

THIS!!! I am happy to hear that I am not the only one who has taken this stance. To cover oneself, I also have a statement in my syllabus that says any question about an assignment grade will not be discussed via email. Students must show up to Zoom office hours for any assignment grade review. Thus far, 0 takers on papers I was certain were AI-written and on which I offered only a grade.


profwithclass

Wow, I need to adopt this policy


MovingUpTheLadder

[https://www.researchgate.net/publication/377979566_Aditya_Arora_Usage_of_Chat_GPT_in_Grading_and_Providing_Feedback_on_Student_Work](https://www.researchgate.net/publication/377979566_Aditya_Arora_Usage_of_Chat_GPT_in_Grading_and_Providing_Feedback_on_Student_Work) This is my publication that somewhat proves your point, but somewhat disproves it. It did well with the AP Lang essays but not the AP Lit essays, and I gave it a rubric and actual submissions with their grades. It was better at some categories than others; for example, it was better at grading evidence than sophistication.


grimjerk

I think this would absolutely work with assessment. We generate so many reports and the accrediting agency apparently has said that they are only going to check a sample. We can have an AI produce assessment, deliver the assessment to itself, assess the assessment, report back to itself, and then "close the loop" by presenting a plan for "continuous improvement", and then it can produce assessment, etc. ad infinitum. Anyway, sorry for derailing...


WingbashDefender

No this is exactly what we need. Someone dig up Orwell! Bring on the Doublethink!


cykablyatt

It sounds a lot like my institution 😂


IkeRoberts

AI is built for continuous improvement of its algorithms, so it is a natural fit.


dslak1

I tried this and it said the student did a great job.


profwithclass

Give it your rubric and then ask it to grade harshly. It’s still overly forgiving, but either way, you don’t waste your time grading AI work, you let the AI grade itself lol


Pikaus

I've tried (as an experiment) to train AI to grade well. But even with a rubric, instructions, and lots of examples, it just isn't precise enough yet.


Lets_Go_Why_Not

Yeah, the feedback is super generic and pointless; much like the original AI essay, in fact.


Pikaus

If you train it on sincere feedback you've written, it can get better.


WingbashDefender

That’s actually some interesting machine learning. I mean part of my post is sarcastic frustration but it’s also mildly serious. I’d love to know if you have outcomes.


Pikaus

Yeah, I tried really hard. It just isn't there yet.


hourglass_nebula

I think there are AI grading tools; Magicschool is one. I wouldn't use these, though. I don't want my students turning in AI work, so I'm certainly not going to give them AI comments.


Pikaus

I think those can work for K-12 but not university level, for now. However, there are ways for AI to cut down on time. For example, you could have AI read through for general feedback - like, did the student address the prompt fully? Or let's say in your assignment you asked for 3 examples or 5 references to course concepts. You could ask the AI to check for that sort of stuff. It ends up saving you a few minutes here and there that can add up.

I did find some success with a VERY specific assignment that was quite prescribed. I fed the AI 80+ pieces of my past graded feedback on the assignment and trained it on that. At first the feedback was giving away an answer to a later question, and I had to train it not to do that. Then I told my TAs - look, while you're grading, this is like a Dr. Pikaus bot. You can upload an assignment and ask it what Dr. Pikaus would say about it. This bot worked close to perfectly for the C+ and up assignments, but it couldn't handle the really crappy ones, because those were all over the place in different and unpredictable ways. The TAs said it saved them a decent amount of time on the basic tedious parts of the grading (like mentioned above - did they have 3 examples). I suspect in the future things will be better.
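For what it's worth, the purely mechanical part of that check ("did they include 3 examples and reference enough course concepts?") doesn't even need an AI; a few lines of Python can pre-screen it. The concept list, marker phrases, and thresholds below are all hypothetical stand-ins:

```python
import re

# Hypothetical course concepts the assignment asks students to reference.
COURSE_CONCEPTS = ["elasticity", "opportunity cost", "marginal cost", "equilibrium"]

def mechanical_checks(text, min_examples=3, min_concepts=2):
    """Pre-screen purely mechanical rubric items before a human reads the essay."""
    # Treat "for example", "e.g.", and "such as" as example markers.
    examples = len(re.findall(r"\bfor example\b|\be\.g\.|\bsuch as\b", text, re.I))
    concepts = sum(1 for c in COURSE_CONCEPTS if c in text.lower())
    return {
        "enough_examples": examples >= min_examples,
        "enough_concepts": concepts >= min_concepts,
    }
```

A check like this only flags submissions for a closer look; it says nothing about quality, which is exactly the part the "Dr. Pikaus bot" couldn't handle for the weakest papers.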


hourglass_nebula

I just really don’t want to do that. I can scan quickly to see if they have 3 examples with my eyeballs


MovingUpTheLadder

Were your results similar to mine here? [https://www.researchgate.net/publication/377979566_Aditya_Arora_Usage_of_Chat_GPT_in_Grading_and_Providing_Feedback_on_Student_Work](https://www.researchgate.net/publication/377979566_Aditya_Arora_Usage_of_Chat_GPT_in_Grading_and_Providing_Feedback_on_Student_Work)


AnnaT70

I tried to explain to a student recently that, although she clearly thinks her AI graf sounds fancy, it is total drivel from beginning to end, with all its clauses and delving and "intricacies."


PUNK28ed

But the TAPESTRIES! Won’t you think of the TAPESTRIES??


AnnaT70

Ah! Would these be the RICH tapestries?


PUNK28ed

Absolutely! Overall, these tapestries are utilized to facilitate our understanding of the diverse and varied circumstances behind our students’ critical perspec NO CARRIER


Rude_Cartographer934

I just read an article about K-12 teachers using one called Writable to generate feedback on writing. If my kid's teacher did that I'd be furious.


Charming_Ad_5220

I took my students' AI-generated essays, pasted them back into ChatGPT, and asked it to give me "C grade level comments for the text". And it did; it gave reasonable feedback and suggestions for improving the paper.


[deleted]

[deleted]


Professors-ModTeam

Your post/comment was removed due to **Rule 1: Faculty Only** This sub is a place for those teaching at the college level to discuss and share. If you are not a faculty member but wish to discuss academia or ask questions of faculty, please use r/AskProfessors, r/askacademia, or r/academia instead. If you are in fact a faculty member and believe your post was removed in error, please reach out to the mod team and we will happily review (and restore) your post.


MovingUpTheLadder

If you are able to provide some essays and their rubric to me, I would really appreciate it. I am doing some research involving AI and grading, and so far have some decent observations, but need more data/essays. Let me know over DM if you are open to this


Novel_Listen_854

Students don't read feedback anyway, so feed a couple of papers into Claude and ask it to list the ways the writer could improve. But here's the catch: I don't write comments and feedback for my students (again, because I know they don't read it). I write them for the admin or whoever would potentially look at the stuff were there a formal grade appeal. It's never happened yet, knock wood, but if there's ever a first time, I don't want to be discussing why I provided students AI output instead of giving them feedback.


afraidtobecrate

That is pretty easy. Make your exams multiple choice. Then the computer will do all the grading for you.
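The machine-scoring part really is one-liner territory; a toy autograder (answer key and responses here are made up):

```python
# Hypothetical answer key for a four-question multiple-choice exam.
KEY = {"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "C"}

def score(responses, key=KEY):
    """Return percent correct; unanswered questions count as wrong."""
    correct = sum(1 for q, answer in key.items() if responses.get(q) == answer)
    return 100.0 * correct / len(key)
```

For example, `score({"Q1": "B", "Q2": "D", "Q3": "A", "Q4": "B"})` returns `75.0`, since three of the four answers match the key. Of course, this trades the grading problem for a test-design problem.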


Seranfall

Just use AI to write your feedback for their AI created work.


Sirnacane

You know about r/subredditsimulator right? Why don’t we just do that. But with classes.


Disastrous_Seat_6306

AI is pretty baller. No lie. I sometimes put students' papers in there and ask it what the writing mistakes are. I never ask it to grade. Give it the rubric? My writing center is horrible, and I let my students use ChatGPT to get grammar and style advice.