17 Comments
Jun 24, 2022 · Liked by Philosophy bear

It's amusing that even the worst answers still read like something that a human could have written.

Nov 28, 2022 · Liked by Philosophy bear

my favorite part of this is that robots are less robotic than non-robots

the robot seems pretty indecisive, though. non-robots stick to their opinions. maybe too much.

Jun 24, 2022 · Liked by Philosophy bear

One thing to note: GPT-3 is, by this point, old hat. It's well behind the state of the art. The only thing it has going for it is a widely accessible user-facing website that pretty much anyone can use. I agree that the plagiarism problem is something teachers will need to grapple with (and one I expect many teachers to be far too slow to address), but we're not too far off from a better model being available, and it'll undoubtedly fix some of these flaws.

Also, I've noticed the "on the one hand, on the other hand" type of thing before; GPT-3 does it a lot. A student who's even mildly clever will just remove the hemming and hawing that comes before the actual argumentation. For the political science question, the answer becomes quite a bit stronger if you remove everything before "generally speaking". This is very typical for GPT-3: it seems to know the basic arguments but likes to equivocate.

Jun 24, 2022 · Liked by Philosophy bear

Interesting; humans will be outsourced. Robots will do all the manufacturing while AI will do all the admin.


So, I guess this means that in a century's time, students will still be taking exams in a hall on pen and paper?

(Only half joking)


AI detection tools can easily be bypassed with Quillbot and deliberate typos. If an instructor accuses a student of plagiarism using AI, he or she must be well equipped with proper evidence to prove it. Right now, GPTZero is the best tool for this, but it is not yet generally accepted as the gold standard for detecting AI-generated content. Using it as evidence in front of the Dean when the student's parents complain might be a headache and could cost one's career.

What I think the solution is, is to embrace the technology and develop GPT-based pedagogy. The teacher posts a question prompt and instructs students to use ChatGPT to answer it. After the first response, students are responsible for asking deeper follow-up questions, and at the end of the session they share their insights on what they discussed with ChatGPT. This is a better strategy: we teach students to ask better questions rather than memorize text.


Regarding the question on the nature of math, I have not seen anything there that would explain why set theory (and its infinities) would be special with respect to the nature of mathematics. Your comment below that question contains your own opinions, which you attribute to the machine. I've seen this approach elsewhere too: people trying to extract absent rationality from AI outputs by supplying their own opinions.

Jun 24, 2022 · edited Jun 24, 2022

There's probably an argument that you shouldn't warn students about this, because doing so will encourage them to do it, or to do it and make minimal changes to ensure they have deniability.

I'm unsure how reproducible GPT-3's outputs are. How likely is it that an essay gets flagged as plagiarism of a previous GPT-3 submission?
