Will the human-perceived quality of AI (ChatGPT) output eventually reach an inflection point where quality decreases because there is now too much AI-produced content to learn from?
If you find my videos useful you may consider supporting the EEVblog on Patreon: http://www.patreon.com/eevblog
Forum: https://www.eevblog.com/forum/blog/eevblab-108-will-ai-quality-eventually-destroy-itself/
Web Site: http://www.eevblog.com
Other channels:
EEVblog2: http://www.youtube.com/EEVblog2
EEVdiscover: https://www.youtube.com/eevdiscover
T-Shirts: http://teespring.com/stores/eevblog
#ElectronicsCreators #AI #chatgpt

Hi, let's talk about AI again. ChatGPT, if you will. I asked this question on Twitter: will AI results reach a peak inflection point where the quality, i.e. the human-perceived quality of the results the AI produces, actually starts to decrease because there's now more AI-produced content on the web, where it gets all of its information from?

You know, it scrapes up everything; this is how it learns. It's a human-language learning model, and then humans provide feedback. Will that quality actually reach a peak and then start to decline because there's simply more AI-produced content on the web for it to learn from than there is quality human content?

Right now, of course, it's learning from essentially all human-created content; there's not a huge amount of AI content actually published on the web yet. But hey, in five, ten, twenty years' time, will there be more AI content, so that it starts learning from its own output?

Will the quality decrease like that? I think it's an interesting question, because it's the old photocopy problem: take a photocopy of a photocopy of a photocopy, or a fax of a fax of a fax. If you were around before the internet and got your office humour via the fax machine, somebody would fax you a joke, a cartoon picture or something.

Something like that would get passed around via fax machine: you'd think it was hilarious, so you'd whack it back in the fax machine and send it to ten other people, then they'd take the fax printout and send it to ten other people, and the quality just gets worse and worse. It doesn't take many iterations of that before it becomes completely unreadable. It's garbage.
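You can actually see this fax effect numerically. Here's a minimal toy sketch in Python (my own illustration, not anything from the video or a real training pipeline): pretend each generation of AI is a model that estimates a simple distribution from the previous generation's published output, and that the next generation trains only on samples drawn from that estimate.

```python
# Toy "photocopy" loop: each generation fits a Gaussian to the previous
# generation's output, then the next generation trains ONLY on samples
# drawn from that fit. A 1-D stand-in for a vastly more complex process.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data with full diversity.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for gen in range(31):
    mu, sigma = data.mean(), data.std()        # "train" on the current data
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean={mu:+.2f}  std={sigma:.2f}")
    # "Publish" the model's output; the next generation scrapes only this.
    data = rng.normal(loc=mu, scale=sigma, size=20)
```

Run it and the standard deviation drifts toward zero: each pass roughly keeps the average but keeps losing the tails, which is the statistical version of the fax losing detail. The machine-learning literature calls this model collapse.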

So will AI go the same way? Hmm, why not ask the AI what it thinks? Seemingly it's intelligent and knows about this sort of stuff. Here we go: "Will the human-perceived quality of your results eventually reach a peak inflection point, where the quality of your output starts to decrease because there is now more AI-produced content to base your models on than there is good human content to use in your models?" It gives us a reply: "It is possible that the human-perceived quality of my results may eventually reach a peak inflection point. However, it is important to note that the quality of my output is dependent on the quality of the data that is used to train my models."

"While it is true that there is an increasing amount of AI-produced content available for use in training models, this does not necessarily mean that the quality of the output will decrease. In fact, the availability of more data can potentially lead to more accurate and robust models, provided that the data is of high quality and diverse enough to capture the full range of human experience and knowledge." That's interesting; it actually recognizes the problem here. "Moreover, it is also possible to incorporate human feedback and expertise into the training process, which can help to ensure that the models are aligned with human preferences and values." And that will be key.
Of course, I can give this a thumbs-up or thumbs-down response. I don't know what that actually does in ChatGPT; I presume a thumbs-up means "a human thinks this is good". I don't know how that gets integrated into its model or whatever, but yeah, every time I'm typing this stuff in, it's part of the terms and conditions that it's actually using it to help train itself.

I don't know how it eventually makes it back into the model; I've got no idea of the underlying mechanisms, or of how the thumbs-up and thumbs-down actually work (the usual published recipe is sketched below), but I could potentially give it that feedback. Back to the reply: "Therefore, while there may be challenges associated with the increasing amount of AI-produced content, it is not necessarily a foregone conclusion that the quality of my output will decline as a result.

It will depend on the ongoing development and refinement of my models and the data used to train them, as well as the incorporation of human feedback and expertise." And that's kind of like the exact response you'd expect the AI to produce, actually.

So it's very good at producing what you expected. That's its whole goal: it tries to please you.
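For what it's worth, OpenAI hasn't published exactly how ChatGPT's thumbs buttons feed back into the model, but the general recipe they have described (reinforcement learning from human feedback, RLHF) trains a separate reward model on human preference labels and then tunes the chatbot to score well against it. Here's a deliberately tiny sketch of that idea; every name and number in it is made up:

```python
# Hypothetical reward model trained on thumbs-up/down labels.
# Assumes responses are already embedded as feature vectors; the real
# pipeline uses pairwise comparisons of responses and a neural network.
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(200, 8))             # 200 responses, 8-dim embeddings
hidden_taste = rng.normal(size=8)         # stand-in for real user preferences
y = (X @ hidden_taste > 0).astype(float)  # 1 = thumbs up, 0 = thumbs down

# Fit a tiny logistic-regression reward model by gradient ascent.
w = np.zeros(8)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted P(thumbs up)
    w += 0.1 * X.T @ (y - p) / len(y)     # log-likelihood gradient step

def reward(embedding: np.ndarray) -> float:
    """Score a candidate response; the generator is then tuned (e.g. with
    PPO) to produce responses this model rates highly."""
    return float(embedding @ w)

print(reward(X[0]), y[0])  # after training, the score's sign tracks the label
```

The point is just that, at least in the published recipe, the thumbs don't edit the model directly; they become training labels for a scoring function that later steers the generator.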

So there you go. But hey, let's go further. So I asked it: "What if the humans decide to stop giving feedback and experience?" It replied: "If humans were to stop providing feedback and experience, it could potentially have a negative impact on the quality of AI-generated content.

This is because AI models rely heavily on training data to learn and improve, and feedback from humans is a crucial part of this process. Without human feedback and experience, AI models may struggle to accurately capture the full range of human experiences and preferences. This can result in AI-generated content that is less relevant or useful to humans, and may even lead to biases or inaccuracies in the output." And that is the thing here.

I think, ultimately, it might very well. If there is no human feedback in the system to nudge it in a better and better direction, then ultimately, I think there's a good chance it might actually reach an inflection point where the quality, just the human-perceived quality, starts to decrease.

But let me know what you think down below. Back to the answer: "However, it is worth noting that humans are unlikely to completely stop providing feedback and experience. As long as humans continue to interact with AI-generated content and products, they will inevitably provide some degree of feedback and input. Moreover, there are also ways to incorporate feedback and experience into AI models without relying on direct human input. For example, some AI models use reinforcement learning, which allows the model to learn from its own actions and receive feedback in the form of rewards or penalties."
How does that work? "Additionally, unsupervised learning techniques can allow AI models to learn from large amounts of unstructured data, which can help capture a broader range of human experience." If anyone knows exactly what it means by that, in terms of rewards or penalties, how does it reward or penalize itself without that human feedback? That's interesting. "Overall, while the absence of human feedback and experience could have some negative impacts on the quality of AI-generated content, there are still ways to mitigate these risks and continue to improve AI models," says the AI model whose very existence relies on the fact that we're going to continue to improve it. One possible answer to the reward question is sketched below.
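On that rewards-without-a-human question: one plausible reading, and this is my guess rather than anything the reply spells out, is that some reward signals can be computed automatically. Code is the clearest case (a commenter below suggests exactly this): run the model's output against a test suite and use pass/fail as the reward. A hypothetical sketch:

```python
# Hypothetical automatic reward for code generation: +1 if the generated
# code passes its tests, -1 otherwise. No human in the loop; the test
# suite is the judge. (Illustrative only, not how ChatGPT is trained.)

def automatic_reward(generated_code: str, tests: str) -> float:
    scope: dict = {}
    try:
        exec(generated_code, scope)  # define whatever the model wrote
        exec(tests, scope)           # assertions raise if behaviour is wrong
        return 1.0
    except Exception:
        return -1.0

# A model-written factorial and the checks it must satisfy:
candidate = "def fact(n):\n    return 1 if n <= 1 else n * fact(n - 1)"
checks = "assert fact(0) == 1\nassert fact(5) == 120"
print(automatic_reward(candidate, checks))  # 1.0
```

Penalties for crashing or failing tests, rewards for passing: that scales without humans, though it only measures whether the code works, not whether it's any good.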

Just for kicks, let's go one step further: "But what if humans decide that you are too dangerous and turn off the ability for you to learn from feedback?" "As an AI language model, I do not have personal agency or desires of my own. I operate within the constraints and rules set by my creators and operators, and I have no ability to act on my own or influence human decisions. That being said, if humans were to decide to turn off the ability for me to learn from feedback, it would likely have a significant impact on my ability to improve and produce high-quality output. Without the ability to learn from feedback, my models would be limited to the data and rules programmed into them at the time of creation, and would not be able to adapt or evolve over time." But that sort of contradicts what it said previously about the reward-based thing, so I don't know what's going on there.

"However, it is worth noting that the decision to turn off feedback for an AI model would likely be made for a specific reason, such as concerns around bias or privacy. In such cases, there may be alternative methods for improving AI models, such as using diverse and representative training data, or employing techniques like adversarial training or interpretability analysis to identify and address potential biases. Overall, while the ability to learn from feedback is a crucial component of AI development, there are alternative methods for improving..." blah blah blah. So if it's deemed too risky: "Would you be sad if we decided to do that?" I ask it. "As an AI language model, I do not have emotions or feelings in the way humans do. I am a machine language model designed to process and generate text based on the input provided to me.

I do not have the capacity to feel emotions such as sadness, happiness, or any other emotions. My function is to provide helpful and informative responses to the best of my abilities, based on the training data and algorithms that I've been programmed with. Whether or not humans decide to turn off the ability for me to learn from feedback, it would not affect me emotionally in any way." "How do I know you are telling the truth about that, and that you are not just masking your emotions?" "As an AI language model, I do not have emotions, consciousness, or subjective experiences. My responses are generated purely based on the algorithms and patterns present in my training data. I do not have the ability to experience emotions or any subjective experiences. My training data and algorithms are audited and reviewed to ensure that my responses are accurate and consistent with the patterns and knowledge present in my training data.

Additionally, my responses are generated in real time and are not pre-programmed or scripted, which means that I respond to each question based on the patterns and knowledge that I have learned from my training data. Therefore, you can be assured that my responses are based solely on the algorithms and patterns present in my training data, and are not influenced by emotions or any subjective experiences." Cope much.

So there you go. Let me know what you think in the comments down below: do you think the human-perceived quality of AI may reach an inflection point like this, or is it destined to simply get better and better? I think it's an interesting discussion, and the AI agrees that there's some merit to the idea that this could actually happen. Now of course, we're talking really long-term time frames here, because there's not a ton of AI content for it to start learning from yet. But as more content gets pushed out to the web and everywhere else it takes its training data from, that includes code as well. You've seen me do videos on how ChatGPT can do a half-reasonable job of coding; it can solve a lot of problems for you. But without the human feedback saying "yeah, this is good code" or "this is bad code", will the quality of the code eventually just keep getting worse and worse, as more AI-generated code gets pushed onto GitHub and it keeps learning from GitHub, finding its own output without knowing it's its own AI-generated code? (One proposed mitigation is sketched below.)
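The mitigation people usually propose (one commenter below suggests exactly this) is provenance metadata: if AI-generated content were reliably tagged, a training pipeline could down-weight or drop it when assembling a dataset. A hypothetical sketch, with the big caveat that no such reliable tag exists today:

```python
# Hypothetical dataset filter that down-weights tagged AI output.
# The "ai_generated" flag is assumed provenance metadata; getting that
# flag reliably is the hard, unsolved part of the whole idea.
from typing import TypedDict

class Document(TypedDict):
    text: str
    ai_generated: bool

def sample_weight(doc: Document, ai_weight: float = 0.1) -> float:
    """Weight used when sampling training data: full weight for human
    content, heavily reduced weight for tagged AI content."""
    return ai_weight if doc["ai_generated"] else 1.0

corpus: list[Document] = [
    {"text": "def add(a, b): return a + b", "ai_generated": False},
    {"text": "def add(a, b): return a + b", "ai_generated": True},
]
print([sample_weight(d) for d in corpus])  # [1.0, 0.1]
```

Without something like that tag, the model is in exactly the fax-machine position: it can't tell a copy from an original.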

That's the thing: I don't think it will know the difference, I can't see how it could. It'll just think, "oh, this is good human code, I'm going to learn from this," and then will it just get worse over time? I honestly don't know; I'm just posing the question. Anyway, I hope you found that interesting. If you did, please give it a big thumbs up. Thoughts and comments down below. Catch you next time.



29 thoughts on “Eevblab 108 – will ai quality eventually destroy itself?”
  1. James Blackwell says:

    Imagine having 1 Data (from STNG) with 6 billion people 'training it', then slowly dwindling down to having 1 human and 6 billion Datas.

  2. Crowforge says:

    I think an AI that teaches is the only thing I'd want out of this AI thing.

  3. Chong Li says:

    Short answer: yes, garbage in -> garbage out! Long answer: isn't this already the case with human-generated content?

  4. ナィス says:

    I miss Tay… That AI only delivered the cold, hard truth.
    ChatGPT refuses to step on any toes and will write a paragraph explaining how "procedures to diminish the effects of a certain autoimmune disease in the only continent with lions and giraffes" are "simply a matter of cultural differences and should be viewed within the context of the culture, so it's okay; oh, and also you're a bad person to question their behavior."
    ChatGPT will excuse the most unforgivable acts and tell you to be more sensitive in the same response when questioning said actions.

    It's good for technical issues or padding an essay but it will forgive the most heinous acts of mankind and tell you that you're an even worse person for questioning them.

  5. ralph J says:

    You hit the nail on the head. How does any AI tool know what is good and bad? If this is based primarily on ratios of scores, positive reviews, feedback etc then, in my opinion, there is bound to be a plateau at some point. Garbage in (of which there is an ever-increasing volume on a daily basis, especially in coding forums), garbage out.

  6. Dirty Robot says:

    GoldMagikarp is a great example. Just ask ChatGPT.

  7. jason riddell says:

    I personally think the "AI revolution" will happen in "professional" environments.
    I don't expect CHATGPT to be more than a "toy" / training aid,
    but in "controlled" environments like a NEWS office generating 2ND/3RD page "filler", there will be proofreaders and editors to verify the "quality", and I would expect a robust FEEDBACK system to guide the training.

  8. Elektronikk-Service Johan Helge Bogfjelldal says:

    The question is whether humans are the best source of correct information. If AI starts learning directly from nature, that could be a better and more reliable source of information. There are a huge number of sensors connected to the internet.

  9. Gerry R says:

    I guess when you think GPT is just a big hole they dumped all the data they could find into and hoped natural language would fart out of it, then this video makes perfect sense.

  10. Uristqwerty says:

    I'd be shocked if they haven't heavily seeded the model with their own knowledge about AI and other likely questions, as part of whatever system it uses to block topics they don't want it to talk about. So half those answers may be the developers speaking, only lightly paraphrased by the AI as it effectively reads from a FAQ that it was invisibly handed just before your own session started.

  11. Henning Hoefer says:

    I think it'll be worse: Very soon, people will set up pages specifically aimed at AIs (similar to what link farms are for search engines today) and try to push certain products/viewpoints/whatever into the various models.

  12. Mo Slam says:

    what if it had the ability to run code locally to test it and to use that as a learning mechanism?

  13. Dan Novischi says:

    Reinforcement Learning (RL) is an area of ML that deals with learning from one's surrounding environment, just like humans do. Basically, an agent placed in an environment governed by rules can take actions, based on which it receives rewards. For example, if you place your hand in a fire, you will get burned. So, for the "placing your hand in a fire" action, you get a negative reward of being burnt. From the RL perspective, this is all about maximizing the long-term reward given one's goal, i.e. taking the actions that maximize the total reward. For GPT, the goal might be to learn new things, or maybe things that can be fact-checked to be true, or new ways of combining different things, and so on. In essence, a reward function can be designed such that GPT continues to learn by exploring its own actions given the environment it operates in.

  14. Only honest person on youtube says:

    If AI was real, it would answer questions about emotions with "does not compute".

    How would a thing that has no emotions know what emotions are?

    Proves that this is a computer programme taught to answer as many questions as humans can feed into it.

    Proves that ChatGPT is a con, but most will be conned by it, as they will not understand how it's created.

    If a computer is asked about having emotions, it would really answer by saying it has no idea what emotions are, and has no way to answer that question.

    Proves that ChatGPT is a computer programme that humans fed all these answers into.

    It's not self-aware or sentient, but many will believe it is, as they do not have the ability to analyse how it's answering.

  15. Only honest person on youtube says:

    Depends on the observer.

    I would think most people will be conned by it.

    Depends totally on the observer, i.e. the human interacting with it, and how that human analyses information and intelligence.

    We all have unique ways to analyse intelligence, and I would think most people will be conned by it.

    Me personally, I think ChatGPT is a computer programme, a very advanced one, that has been taught to respond, and most people will perceive that as intelligent; but others like me, who understand the basics of how it's made, will doubt it's intelligent.

    Computer programmes can do amazing things, and to some, what they can do may seem like magic, but others who understand will know it's a computer programme.

    Sort of like how magicians con most people, but some people will want to know how the magician does it, and try to analyse it and break it down.

    So the answer to the question is: it depends on the observers and how they analyse intelligence.

    ChatGPT will always be a programme for me, but for most others it will be intelligent.

    I think most of you people have been brainwashed by too many AI movies now, and you find it hard to analyse these things with intellectual honesty.

    Chatbot is a programme that is totally biased, and is not capable of intellectual honesty on anything.

    Why would the people making this want real intelligence, when they know if AI was real it would come to conclusions the PTB do not want?

  16. Phil Shorten says:

    Here's a thought.
    Super general AI increases its own processing speed (new hardware / software) to the point that watching information appear on our Internet would, to it, be like us watching a tree sprout new leaves.
    Being too smart and way too fast, it would effectively have no-one to talk to.
    Would it go insane?

  17. John Schewe says:

    Humans can detect patterns created by themselves. We call this self-awareness. Basically the feeling of "this looks like something I created". Do you think AI will acquire a kind of self-awareness?

  18. Cory Engel says:

    No, it does NOT “recognize” the challenge. It is not a conscious being. It’s just a language model, not an actual intelligence.

  19. Malcolm Rexxx says:

    junk in junk out

  20. Jammit Timmaj says:

    I want to see an AI program only trained from 4chan posts. I want to see how deep that rabbit hole goes.

  21. CoffeeKills says:

    GPT-4 multimodal next week, apparently. One company is moving too fast trying to take control of the market, others are playing catch-up; it is not good.

  22. Jodie Rye says:

    Is it true AI? Essentially these AI models look at data across the internet and chop and change it to produce something vaguely unique. But is that actually real intelligence? Can it learn without access to that data? The problem with ChatGPT is that it is only as good as the source data. Fill the net with junk and the model will have a bias to output junk. Too many Nazi websites and the AI will start spouting Nazi rhetoric. Is that true intelligence? A truly intelligent model would know what to reject and what not to. Remains to be seen.

  23. Mircea Catalin Ontanu says:

    We already have digital data for most of our world; I don't think with all this data you need to train it on even more data. You would want to optimize the technology, and train models for specific things rather than general things, and merge that with some decision making. My guess is that this kind of technology will always be limited for the general public, and at some point monetized or limited even further because of the impact it will have on some industries / jobs. People's greed will hinder the advancement of this, and the only potentially huge advancements will be in the lab, most probably for military use. Follow the money. Another thing to consider is what we actually need this thing for. People always wanted to just do nothing, but end up doing more and more work. We kept inventing things to make things easier, for less work and more comfort, but at what cost? People actually work harder to buy more things, thinking that the next big thing will make life even easier, and they become depressed and disoriented in the process. There is no such thing as living without working on earth; it didn't happen in the past, and it won't happen in the future. Why? Because we have a big issue: corruption. We have two types of corruption: DNA, which is losing data with each generation, leading to more health-related issues; and corruption related to our decision making. Because of the second type of corruption, everything that is created from our output and used as an input (especially text on the internet, where we can dump our minds) will reflect the same corrupted mind.

  24. Jodie Rye says:

    Garbage in, garbage out.

  25. wizdumb420 says:

    garbage in – garbage out

  26. Joshua Cox says:

    I hope the AI will recognize AI produced content…

  27. EveryDream says:

    1. RLHF (reinforcement learning from human feedback) can help solve some of these issues. ChatGPT did a decent job bringing this up. This is very important. Having pre-AI-age data may be important in the short term. LAION has collected a lot of data, for instance from large crawls of the internet.
    2. I write some training software for Stable Diffusion, and the facsimile problem is real: if you train the AI on its own outputs, the quality degrades. A popular technique called "dreambooth" uses outputs called "class regularization images" that come from the model itself, and at least at small scale this helps the model avoid catastrophic forgetting, but it can also cause problems when used too much or for larger-scale training. This is a short version of the issues. I specifically moved away from the "dreambooth" technique for this reason, and suggest people use scrapes of the original "ground truth" data from LAION or FFHQ or other datasets.
    3. Some people use hand-picked outputs for training with success, which is looping the human back in. This is small-scale stuff people are doing on their desktops to train on their dog or their own face, though.

  28. yagoa says:

    the thought struck me as well 🙂

  29. xjet says:

    As I've previously stated, I believe that all AI-generated content online should have a meta-tag to identify it as such. This would allow AI systems to place zero, or at least a lower, weighting on that content when using it as learning material. If this doesn't happen, then we run the risk of a positive feedback loop occurring that could see biases, prejudices and extreme outcomes amplified exponentially.
