"The top researchers became over 75% more productive. The bottom researchers became less than 25% more productive."
This seems atypical to me. Looking at art and text, AI allows me (incapable of drawing a circle) to produce OK illustrations for my Substack posts. I doubt it would do much for Michelangelo. On the other hand, it allows my illiterate students to produce what looks like proper text. The only use I have found is to have it do the same for dot points I input, and even then post-editing is needed to bring it up to scratch, put it in my own voice, etc.
Summing up, in most domains AI makes mediocrity easy but does nothing for expertise. Looking at your example, I wonder how "top" has been defined. If you mean "top users of AI", then your results are true by definition. But there's no reason to think these were the top scientists before AI, or even after AI. Economics is only dubiously a science, but the applications of AI/machine learning I've seen are mostly rubbish, and not for fixable reasons like hallucination. Pattern matching can only get you so far.
I think you're only considering the case where the AI is making the final product. In this case yes, it's not very good and will only improve on a human if that human is not very good either. But the actual use-case for today's AI is helping with small but tedious portions of larger tasks.
In programming for example, today's AI cannot write a fully-featured application, but it can do small parts of it that are similar to tasks it's seen in its training data, like "write me a function that converts an image to greyscale". It doesn't matter how good a programmer you are, it will always be faster to ask an AI for things like this than to write them yourself. And a better programmer will be faster at spotting bugs in the AI's code, and more able to understand what tasks AI is capable of doing, thus avoiding bugs in the first place.
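The greyscale example mentioned above is exactly the kind of small, well-trodden task an AI assistant handles reliably. A minimal sketch of what such a function might look like, using plain Python lists of RGB tuples rather than any particular imaging library (the function name and pixel representation are illustrative assumptions, not from the original comment):

```python
def to_greyscale(pixels):
    """Convert an image, given as rows of (r, g, b) tuples with values
    in 0-255, to a single luminance value per pixel.

    Uses the common ITU-R BT.601 luma weights (0.299, 0.587, 0.114),
    one plausible convention an AI assistant might pick for this task.
    """
    return [
        [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
        for row in pixels
    ]

# Example: a 1x2 image with one white and one pure-red pixel.
image = [[(255, 255, 255), (255, 0, 0)]]
print(to_greyscale(image))  # [[255, 76]]
```

A strong programmer would still review output like this: checking the weight convention, the rounding behaviour, and whether the pixel format matches the rest of the codebase is exactly the faster-bug-spotting the comment describes.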
Our economy and meaning systems should not be co-reliant.
> difficulty 3of adapting
Typo