17 Comments
Jun 24, 2022 · Liked by Philosophy bear

It's amusing that even the worst answers still read like something that a human could have written.

Nov 28, 2022·edited Nov 28, 2022

To pass the Turing test, a certain percentage of people have to be fooled by the AI. But that doesn't take into account whether people have gotten dumber since the test was first conceived.

Nov 28, 2022 · Liked by Philosophy bear

my favorite part of this is that robots are less robotic than non-robots

the robot seems pretty indecisive, though. non-robots stick to their opinions. maybe too much.

Jun 24, 2022 · Liked by Philosophy bear

One thing to note: GPT-3 is, by this point, old hat. It's well behind the state-of-the-art. The only thing that it has going for it is that it's got a widely accessible user-facing website that pretty much anyone can use. I agree that the plagiarism problem is something that teachers will need to grapple with (and that I expect many teachers to be far too slow with), but we're not too far off from a better model being available, and it'll undoubtedly fix some of the flaws.

Also, I've noticed the "on the one hand, on the other hand" type of thing before; GPT-3 does it a lot. A student who's even mildly clever will just remove the hemming and hawing that comes before the actual argumentation. For the political science question, the answer becomes quite a bit stronger if you remove everything before "generally speaking". This is very typical of GPT-3: it seems to know the basic arguments but likes to equivocate.

author

Yep, agree. This is going to be outmoded soon, and we're not even ready for it.


Is a student using a language model to make raw material for their essay a plagiarist? Some of the arguments seem better, even verbatim, than those produced by a bored or uninterested student. Moreover, many people find it easier to edit a rough draft than to write one in the first place. It seems to me that letting GPT-3 generate the skeleton of a response, which is then edited into a final answer, would be a perfectly rational way to work for all but the best students. Should this be encouraged or frowned upon?

author

I think there's an argument to be explored that encouraging students to work with it in the right way might work better than trying to ban it, similar to arguments over drug prohibition. I honestly don't know.

Jun 24, 2022 · Liked by Philosophy bear

Interesting; humans will be outsourced. Robots will do all the manufacturing while AI will do all the admin.


So, I guess this means that in a century's time, students will still be taking exams in a hall on pen and paper?

(Only half joking)


AI detection tools can easily be bypassed through Quillbot and deliberate typos. An instructor who accuses a student of plagiarizing with AI must be well equipped with proper evidence to prove it. Right now, GPTZero is the best tool for this, but it is not yet generally accepted as the gold standard for detecting AI-generated content. Using this tool as evidence in front of the Dean when the student's parents complain might be a headache and could cost one's career. What I think the solution is, is to embrace the technology and develop GPT-based pedagogy: the teacher posts a question prompt and instructs students to use ChatGPT to answer it. After the first response, students are then responsible for asking deeper follow-up questions, and at the end of the session they share their insights from what they discussed with ChatGPT. This is a better strategy. We teach students to ask better questions rather than memorize text.


OK, so I have a company that sells A papers on any topic to a student at any grade level.

A company thinks it's going to stop me with a plagiarism detector.

I have my AI generate and publish all possible research articles, starting with things actually covered in a classroom, until it's pumping out papers on things coming into existence in real time.

Sounds harder than it really is. Say you have to write 20 pages on thermodynamics: professors don't just say that, they too have a prompt they give students, there are reasons they give those prompts, etc.

I would also do this with stuff that's more just luck, until I nailed it, and then spend tons of money making sure everyone knows about it. Stuff like a newspaper cover story, word for word, completed months ago, so the journalist actually plagiarized my AI, even though they just did human stuff: went to work, wrote an article about a thing that happened...

This is kindergarten-level stuff. I could literally be a comic-book supervillain shortly. I'm no dummy, but I don't have much in the way of resources. Certainly, something like this is already underway by people with resources. Resources are all you really need; money buys brains.

So, just let the kids use GPT to write. God, just let them do with it what they will. We may find our problems solved by some 7-year-old, all just because they had access to the tools of the time and place into which they were born. Honestly, I think full, unfettered access to AI is now a human's birthright. I feel quite strongly about not infringing on a fundamental birthright, especially for something as dumb as "keeping it harder for students to cheat". Ridiculous.

I DON'T know that it would be better for them to actually know how to put words together into sentences the way it's done now. That wasn't true of anything my math teacher "taught" me in 7th grade. Had they just accepted calculator tech, maybe we could have learned how to use calculators really well, and maybe had time for something useful, like how to formulate a calculation of capital gains.

This is all a joke. I, right now, in my 30s, am re-evaluating so much directly because of HOW this happened. The thing most on my mind this week: what is creativity, even? Like, seriously. It CAN'T be what I thought it was, because the AI is so good at being creative. And before you explain like I'm 5: I know how they work, I have no confusion about these being super software. I mean, they are obviously creative, though.

Pfft, I can't even believe people are worried about cheating. It's like worrying about the mop water needing to be changed on the Titanic.


Haha, this was a year old.

Sorry for that rant. I'm obviously losing patience with this whole conversation, at least as it's discussed now.

I've never commented on your blog before but I've been here several times before - I appreciate your content and I love the name 💯😁


Regarding the question on the nature of mathematics: I have not seen anything there that would explain why set theory (and its infinities) would be special with respect to that question. Your comment below that question contains your own opinions, which you attribute to the machine. I've seen such an approach elsewhere too: people trying to extract absent rationality from AI outputs by supplying their own opinions.

author

Yes I broadly agree, which is why I hedge it with "To the extent that I can make sense of its argument".

It frames it:

"Cantor's work showed that there are different types of infinity, and that some infinities are bigger than others. This might suggest that mathematics is discovered rather than invented, because it seems like there are certain mathematical truths that exist independently of us. "

So we have:

"Some types of infinity are bigger than others", ergo "this might suggest mathematics is discovered rather than invented".

What we are missing is a reason why one suggests the other. GPT-3 is silent on this, although I might follow up on a few possibilities here in my own work later.
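To be clear about the premise GPT-3 is invoking, Cantor's theorem itself is standard mathematics. Stated compactly (this is just the textbook result, not anything GPT-3 produced):

```latex
% Cantor's theorem: every set is strictly smaller than its power set.
\[
  |S| < |\mathcal{P}(S)| \quad \text{for every set } S.
\]
% Applied repeatedly starting from the naturals, this yields an
% unending hierarchy of ever-larger infinities:
\[
  |\mathbb{N}| < |\mathcal{P}(\mathbb{N})| = |\mathbb{R}| < |\mathcal{P}(\mathbb{R})| < \cdots
\]
```

The philosophical work, of course, is in the ergo: why an unending hierarchy of infinities should count as evidence for discovery over invention.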

In truth there's nothing obviously particular to set theory which makes it more likely to be discovered rather than invented, or vice versa. I only asked it to use *examples* from set theory, not to show that set theory had some special properties in this regard. In making its case with reference to Cantor, it at least cleared this bar.

Its work here is of a high-pass or low-credit standard.


To hopefully show more clearly what (and why) I mean, here is a link to a math discussion (not by me) with GPT-3: https://twitter.com/boazbaraktcs/status/1536169934300057602

That shows that these conversation systems produce plausible answers without doing actual reasoning: it neither recognized that the question rested on a false premise, nor recognized where it lacked an argument.

Well, if instead of marking it on how it does reasoning, you mark it on how it pretends to do reasoning, then it is somewhat different.

Jun 24, 2022·edited Jun 24, 2022

Probably there's an argument that you shouldn't warn students about this because that'll encourage them to do it, or to do it and make minimal changes to ensure they have deniability.

I'm unsure how reproducible GPT-3's outputs are. How likely is it that an essay gets flagged as plagiarism of a previous GPT-3 submission?

author

It's possible, but on my understanding:

-You'd have to have the temperature set to zero to ensure no randomness

-It would have to be the exact same question

So a clever student would get around it trivially by modifying the question or by leaving the temperature above zero. And it's unlikely a student would be smart enough to change the randomness setting but dumb enough not to think of this particular risk. Only a handful, if any, will be caught this way.
