David explains PID controllers.
First part of a mini-series on control theory.
Forum: http://www.eevblog.com/forum/blog/eevacademy-6-pid-controllers-explained/
EEVblog Main Web Site: http://www.eevblog.com
The 2nd EEVblog Channel: http://www.youtube.com/EEVblog2
Support the EEVblog through Patreon!
http://www.patreon.com/eevblog
Donate With Bitcoin & Other Crypto Currencies!
https://www.eevblog.com/crypto-currency/
EEVblog Amazon Store (Dave gets a cut):
http://astore.amazon.com/eevblogstore-20
T-Shirts: http://teespring.com/stores/eevblog
Likecoin - Coins for Likes: https://likecoin.pro/@eevblog/dil9/hcq3
Hello everyone, let's talk about the coolest topic ever... maybe: basic control theory. What we're going to do is talk about PID controllers and introduce the most fundamental concepts of control theory. This is the start of a few control theory videos, so any feedback would be really appreciated.
So let's start. Control theory is fundamental to most things, but it isn't always evident on the surface. Rockets control for their pitch, yaw, and other variables for stability; ovens control for their temperature; and cars control for all kinds of things. One of the obvious examples is cruise control, which controls for speed. Even you, in your day-to-day activities, run a basic controller, or at least a whole ton of them, when you walk to the shop.
Let's say this guy here, Bob, walks to the shop with his GPS. He gets his coordinate from his geocaching GPS, which just gives you a coordinate, and he knows the coordinate of the shop, so he can subtract the two to get the distance. That distance to the shop is the error, which is basically the fundamental concept of control theory. Control theory is controlling and minimizing error. So there's a few things about error. Error is the setpoint, which is where you want to be, minus the feedback, which is where you are. This is an example with position, but it doesn't have to be.
It could be speed, or acceleration, or magnetic field strength, or energy, or power. All kinds of variables can be controlled using the minimization of error, which is control theory. An error of zero would often be called the steady-state position, and it is usually the goal. In cases where it's not the goal, I'm sure they'd be very happy with zero error anyway.
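As a tiny sketch (the function name and numbers here are just made up for illustration), every controller in this video starts from the same one-liner:

```python
# Error is always: where you want to be, minus where you are.
def error(setpoint, feedback):
    return setpoint - feedback

# Bob's walk: shop at 100 m along the street, Bob at 40 m -> 60 m of error left.
print(error(100.0, 40.0))   # 60.0
# Cruise control: set to 110 km/h, car doing 115 -> negative error, slow down.
print(error(110.0, 115.0))  # -5.0
```

The same subtraction works whether the units are metres, degrees, or km/h, which is why the rest of the video can swap examples freely.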
After getting back from the shop, Bob realized that he'd eaten far too much chocolate, felt guilty, and bought a treadmill. On the treadmill, he realized this is another element of control theory. He's controlling the speed of his feet. But what is he actually controlling? What is he looking to do? Well, he's not actually controlling his speed, he's controlling his position.
He wants to be in the center of the treadmill. If he starts to drift back, he knows he's not going fast enough. And if he starts drifting forward, he knows he's going too fast.
So: if his actual position is greater than his desired position, then he's got an error less than zero and he should slow down. If his actual position is less than his desired position, then his error is greater than zero and he should speed up.
So what do we do with this error? Well, we use this error to control effort, and effort is basically how much effort Bob's putting in on the treadmill. If he's not putting in enough effort, he'll fly off the back and hit his head; if he puts too much effort in, he runs into the stand thing here at the front. So control theory is evident in basically all elements of things you do, from running to walking to your body's heat regulation. And while it may not be a PID controller or any formally represented mathematical controller, it is explainable through basic control theory concepts. Most control theory textbooks and lectures have diagrams like this throughout the entire book. This is a basic diagram which represents the two previous examples, where the setpoint is the position Bob should be and the feedback is the position Bob is at. He gets that feedback from his eyes, he uses his legs to move to different positions, his brain controls his legs, and he looks at the error to make decisions with his brain.
So if we go through it like this, you see why it's called a control loop. Because it's a loop. It just goes round and round and round.
And for control loops in digital systems, it's usually a periodic loop. It goes round once every second, or every ten milliseconds, something like that, making very fast and (hopefully) sensible adjustments so that the system is controlled. When Bob was walking to the shop, his position was going up the whole time, but his error was going down, and that is what we want to do. We want to minimize error.
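A periodic digital control loop can be sketched like this. It's only a toy: the "plant" model and the gain are made-up numbers, not anything from the video, but the read-decide-act-repeat shape is the loop being described:

```python
# Toy periodic control loop: each tick, read feedback, compute error, apply effort.
def run_loop(setpoint, state, gain, steps):
    history = []
    for _ in range(steps):
        err = setpoint - state        # setpoint minus feedback
        effort = gain * err           # decide how hard to push
        state += 0.1 * effort         # toy plant: effort nudges the state a little
        history.append(state)
    return history

states = run_loop(setpoint=1.0, state=0.0, gain=2.0, steps=50)
# Like Bob walking to the shop: the state rises every tick while the error shrinks.
```

In a real controller the loop body would read a sensor and drive an actuator, and a timer would enforce the fixed period.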
So up the top here, his position is equal to the initial error. If you look over here, his error is actually the distance to the shop initially. So in this case the error units are the same units as the position, which is much simpler than it could be, because sometimes you have to convert the units so that they match and are comparable. So, one of the most common industry controllers is the PID controller.
That's the proportional-integral-derivative controller, and these things are found everywhere: from motherboard fan controllers, to heaters, to balancing Segway-type things. All kinds of things.
They're quite a primitive controller, but they're really easy to tune and pretty fast to get up and running, so we're going to talk about that. Okay, so a PID controller can actually be separated into three different controllers: a proportional controller, an integral controller, and a derivative controller. We're going to talk about a proportional controller's behavior first. A proportional controller is actually analogous to a mechanical spring, where the default position is when the spring isn't stretched or compressed, and that default position would be the controller's setpoint.
If the spring is stretched, then the distance between the setpoint and the default point is the error. This is the error. When the error is negative, the force is negative; the controller outputs a force.
In this case, when the error is positive, the force is positive, trying to move it back to its setpoint. Now, you'll notice there's an equation up in this top corner here, and it should be quite familiar to many of you. It's the exact same form as Hooke's law, where Hooke's law is F = k·Δx. Δx is really a difference of positions, but error is a more general way of expressing that difference: it's just the difference in an arbitrary unit, which isn't necessarily distance. So a proportional controller is basically an ideal spring. An integral controller can be used in lots of things, and it's especially useful when you need the error to eventually go to zero. There are examples like precision ovens and anything that balances. All kinds of things require zero error, because if there is any error, it ends up not performing to spec, or ending up halfway across a continent if it was a plane, or just missing its target.
So integral control is one of the more useful controllers in scenarios where you need zero error. Now we're going to talk about an oven controller. Ovens aren't usually controlled with integral controllers; they should probably be controlled with a thing called a bang-bang controller, because that is the scenario where bang-bang controllers are basically optimal.
But for this scenario, because it's simple, we're gonna use an integral-only controller. A bit weird, but whatever. Anyway, in this example we're trying to heat up this oven, and we're gonna do it manually because it's not a very smart oven; we just have this knob which you can turn.
But we've put a thermometer in the oven, and we want to make sure that we're cooking our chicken at... nope, apparently I'm cooking it at 10 degrees, because that's what the diagram says. We're cooking our chicken at 10 degrees. All right. Anyway, initially the oven's temperature is zero because someone put it in the fridge, and that would mean the error is 10: the setpoint, minus zero, the actual temperature.
So that's here. Now, if you get the area of this error here, then that is equivalent to the integral up to time one. So our effort would be K times the integral we just had, and in this case the integral's result is just 10. And for simplicity, we're just going to have the constant out the front as 1.
But this can be anything. Initially our control effort U is zero because we haven't actually started the controller, but after time one we're able to calculate our new control effort, which will be 10. We then move the system forward by one more second, and the oven has heated up a little bit because we have put a little bit of energy into the heating elements.
Okay, so after the first sample, we're up to this sample here; we're taking a reading at the second second. We've realized that the oven has gone up two degrees, and that lowers our error by two. So it was initially 10 degrees, and 10 minus 2 is 8. Now, integrals add up areas, that's how they work, so the result of the integral in this case would be 10 plus the area here, which is 8. Our next control effort, for the period between 2 and 3, will be 18. If you follow this through for a little while, the oven heats up more because we're putting more effort in. We get an error of 5, we add the 5 to the 18, and that results in a control effort of 23: the area here, plus the area here, plus the area here. The oven has heated up even more at this point, and the integral component is actually getting quite large. Notice there's nothing driving the integral component smaller.
That's very important, and it's one of the biggest pitfalls of integral controllers, as we'll see in a moment. So the oven's temperature is now 9 degrees. We have one more degree to go, so the error is one, and we just add that to the result. So that's 24 for the effort.
Now we have all that accumulated area here, and we're still probably heating up the oven, because even if you dial it up to 11, it takes ages to heat up. There is thermal mass in the oven that takes a lot of time to heat up, and that means there's a delay. The integral component has a very high value: we're up to 24, and that means we're going to overshoot our setpoint.
So in the next sample, we unfortunately go negative. We end up having an error of negative 4 because we've overshot our temperature: we have 14 degrees, and 10 minus 14 is negative 4. If you add the negative 4 area here to these areas, then we end up putting in an effort of 20.
The slight reduction in effort (because this oven apparently has basically no thermal mass) results in the error reducing a little bit, and that again lowers our control effort, to 19. After the next sample, we've finally got zero error. But that's not the end of the story, because the control effort is still greater than zero. If it converges to a value above zero, it keeps heating; it should be stable at some point, and it isn't yet.
At the next sample, we notice the error has gone to one, and adding that area takes us to 20, and at that point the oven's temperature is stable. Apparently it's the fastest-heating oven in the world; it heated up in eight seconds, so, really great. Notice that the waveform here ends up going like this a little bit; that excursion there is called overshoot, and this is something very important.
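The oven walkthrough above can be replayed as a little simulation. This is only a sketch: the 10-degree setpoint matches the video, but the heat gain, heat loss, and step count are made-up plant numbers, so the exact figures differ from the whiteboard ones. It still shows the integral winding up, the temperature overshooting, and the system settling at zero error with a nonzero effort:

```python
# Integral-only controller driving a toy oven model.
def simulate_integral_oven(setpoint, ki, heat_gain, loss, steps, dt=1.0):
    temp, integral, temps = 0.0, 0.0, []
    for _ in range(steps):
        integral += (setpoint - temp) * dt               # accumulate area under the error
        effort = ki * integral                           # effort is Ki times that area
        temp += (heat_gain * effort - loss * temp) * dt  # heating minus heat leaking away
        temps.append(temp)
    return temps

temps = simulate_integral_oven(setpoint=10.0, ki=1.0, heat_gain=0.02, loss=0.1, steps=120)
# The temperature climbs past 10 before settling back, because nothing drives the
# integral smaller until the error has been negative for a while.
```

At steady state the integral holds exactly the value whose effort balances the heat loss, which is why the effort stays above zero even at zero error.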
This is a problem that integral controllers face. They have a thing called integral wind-up, which is the building up of this area here, behind the current reading, and there are lots of ways to deal with it. Many controllers just say you will never have more integral wind-up than some value X; they simply limit the possible integral value.
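That clamping idea can be sketched in a couple of lines (the limit of 20 is just an example value):

```python
# Anti-windup by clamping: never let the accumulated integral exceed the limit.
def accumulate(integral, err, dt, limit=20.0):
    integral += err * dt
    return max(-limit, min(limit, integral))

print(accumulate(18.0, 5.0, 1.0))   # would be 23, clamped to 20.0
print(accumulate(10.0, 5.0, 1.0))   # 15.0, under the limit, left alone
```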
So if your limit was 20, then this would just be limited to 20 and never go above it. We've talked about proportional controllers and integral controllers. From proportional controllers we get our response speed, how fast the system will respond, and from integral controllers we hopefully get zero error. So why would we want a derivative controller? It sounds like we've got what we want. Well, not quite. Consider a system that's reached steady state, so we've got this nice solid line. You want to be able to reject disturbances. And what is a disturbance? Well, say someone sneezes in front of a pendulum, and the error suddenly jumps. With an integral controller, it takes a while for the system to build up any control effort, so the integral controller doesn't really respond well in this scenario at all.
A proportional controller does respond, but proportional controllers don't have the response speed before the value ends up getting large; a proportional controller is just a constant times the error. So what do we do? Well, what we can do is take the derivative.
That means the rate of change. And what is the rate of change when someone sneezes in front of the pendulum? Very high. So with a very high rate of change we get this nice opposing force (well, it could be opposing, depending on the sign of the constant): as soon as the disturbance occurs, we can resist it, move back to our setpoint, and allow the integral controller to resume its normal business.
Now, there are problems with derivative controllers, really big problems, and these are mainly the limitations of sensors, discretization, and things like that. Any noise on the line means you end up taking the rate of change of noise, which ends up being, well, pretty damn noisy. So you keep putting in control effort that comes from the noise, and you really don't want to be doing that, because it causes a kind of instability in the system and makes the system very noisy, audibly noisy even. In the case of pendulums and cruise control, the vehicle would be shaking or something. An example of this equation is actually in the damper.
A damper is something that absorbs energy, and it stops endless oscillations in things. The derivative controller's equation is actually in the same form as the damper equation, the equation that represents this mechanical element, and the damper is a fundamental element in car suspension. If you just had a spring, you'd be bouncing all over the place.
If you add the damper, you can make sure that your car smoothly copes with shocks and disturbances. The other thing the derivative controller does is absorb energy, so it resists change in any scenario, including change which results from control effort. So it has its pitfalls.
That is, that can be a pitfall. The derivative component can also help to reduce the settling time of a system. This is because the derivative component acts against change of error: if the system has reached its setpoint, it's going to resist any change away from that setpoint, and this includes overshoot. So this is the equation, which might be scary to some, that represents a PID controller: the proportional controller, added to the integral controller, added to the derivative controller.
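Putting the three terms together, a minimal discrete PID sketch (the gains and timestep are arbitrary illustration values, not tuned for anything):

```python
# u = Kp*error + Ki*(accumulated error) + Kd*(rate of change of error)
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, feedback):
        err = setpoint - feedback
        self.integral += err * self.dt     # integral term: area accumulated so far
        deriv = 0.0 if self.prev_error is None else (err - self.prev_error) / self.dt
        self.prev_error = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=1.0, ki=0.5, kd=0.1, dt=1.0)
print(pid.update(10.0, 0.0))   # first tick: 1*10 + 0.5*10 + 0 = 15.0
```

A production version would add the anti-windup clamp from the oven section and usually filter the derivative term to tame sensor noise.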
Now, this form isn't what you see in control textbooks very often, because the textbook form is actually easier to work with. Which brings us to tuning. Tuning is the process of improving the constants in front of the proportional, integral, and derivative controllers to improve the response of the system: you might want it to respond faster to disturbances, you might want it to reach steady state faster, you might want it to be slower.
And this form makes tuning a bit easier, because it allows you to use a process called Ziegler-Nichols tuning, which gives you a starting point for the values of these constants. You wouldn't want to rely on just that process, but it does give you a starting point for the constants in front of the components. So let's talk about tuning. If we have a response that looks like this, what should we do? Well, in this scenario we don't want it to ring like this.
We don't want it to have this ringing period here; we want it to look like this. We've got our constants from the Ziegler-Nichols tuning, and we want to improve them.
We want to make it closer to our target shape. So what do we do? Well, oscillations like this are caused by the integral component and the proportional component, and if the integral component is the dominant one, then it's very likely that integral wind-up is causing this overshoot.
So let's try reducing our integral component. Now it's likely you end up with something like this, and that might still not be the response you want. You notice that the integral component is no longer significant in the control effort compared to the proportional component, so now let's reduce the proportional component.
Now it doesn't oscillate anymore, but we do want it to perform a little bit faster. So what do we do? Well, the integral component: now we can change it, and probably increase it. We know that the right value of the integral constant is somewhere between what it was initially and what it was after our tuning.
So let's take the average of those two values: call them K1 and K2, and make (K1 + K2) / 2 the new integral constant. Now we'll probably get a little bit of overshoot, but we've got that response time back. We haven't really got any derivative component yet, so we now want to start working on that. We're now testing our disturbance rejection, the response to disturbances. So we start bumping it, and we notice that in the normal case it decays really slowly.
We want it to behave like that first curve, so we increase the derivative component so that it's still relatively insignificant in the initial curve, but when there are huge rates of change like this, it has significance: it becomes more significant than the proportional component.
So we increase the value, maybe by a few percent, ten percent or something, and then we see how the response ends up; you'd probably end up with something like that. So you do it again, and now you notice that you end up with the response you want, as you slowly and iteratively improve the coefficients.
Now, each time you change one of these variables, you're probably somewhat wrecking your tuning for the other variables, so you have to retune those as well. This process is an iterative one; you slowly improve it. And there's actually an absolute ton of methods for doing this, probably thousands of them.
There are tons of methods to get the initial values, and I think the Ziegler-Nichols ultimate cycle method is probably the easiest, so I'm gonna show you how to use that in a moment. We know how to tune, but we don't know how to get a starting point; what are we tuning from? Well, the way to get a starting point is to do the following. In many systems, it's acceptable to test the unit a fair bit before you end up having to use it. So what we do in this process is run the unit with just a proportional controller. Let's say it's the oven.
We have the oven with just that proportional controller, and we've got a digital controller sensing the temperature and adjusting the oven accordingly. So what do we want to do? Well, if we make the proportional component too large, then we end up with a response that does this: we get these ever-increasing sinusoids, and this is very unstable. This is the basic definition of an unstable system.
What we want is a stable sine wave, so that the amplitude is roughly not changing. Actually, that's the way normal Ziegler-Nichols is done, but I think you should probably aim for something that is stable but decays extremely slowly. You're looking at the error, and you want the error to be this constant sinusoid.
It might seem a bit weird, because that's exactly what we don't want to happen, but this is how we do Ziegler-Nichols tuning. We start off with a constant for Kp, the proportional gain, that just oscillates like this. When you have this constant, you can use the equations people have come up with that use this value to create a starting point for your system's controller. From before, we have our Kp value, and when Kp is equal to the gain that results in a stable sinusoid, it is called Ku. So Ku is the gain at which you've got a stable sinusoid. Now, presumably you can measure the sinusoid's period, and that value is quite important because we're going to use it in this tuning method: it's called Tu. With Ku and Tu, you can use this equation here to get default values for the controller coefficients.
So 0.6 Ku: 0.6 times the value of Ku is the coefficient in front of the proportional controller, and you can use that value as a starting point. Tu / 2, the period divided by 2, is the constant for the integral controller, and Tu / 8, the period divided by 8, is the constant in front of the derivative controller. So that actually gives us everything.
We now have an initial value for Kp, Ki, and Kd, because we have an initial value for Kp, Ti, and Td. To tune the values, all you do is expand this equation, and then you can tune the constants in front of the controllers as before; it's much easier to do the iterative improvement tuning on this equation than on that one, in my opinion. So after you get these Ti and Td values, I would convert to the form you see above here. You do that like this: Ki equals Kp over Ti, and Kd equals Kp times Td. And that's it. Now, I am aware that control theory can be a little bit dry, so this is just the start; this is the boring part.
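The ultimate-cycle starting point and the conversion back to Kp/Ki/Kd can be sketched together (the Ku and Tu values below are made up; you'd measure yours from the oscillating system):

```python
# Classic Ziegler-Nichols PID starting point: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8,
# then convert to parallel-form gains: Ki = Kp / Ti, Kd = Kp * Td.
def ziegler_nichols(ku, tu):
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0
    return kp, kp / ti, kp * td   # (Kp, Ki, Kd)

kp, ki, kd = ziegler_nichols(ku=10.0, tu=2.0)
print(kp, ki, kd)   # 6.0 6.0 1.5
```

Note the Kd conversion is Kp times Td, not Td times Kd; a slip of the tongue in the video has tripped up a few commenters.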
What we have coming is an inverted pendulum balancing robot, which will just wander around the office balancing. It's going to be designed, 3D printed, and then we're going to use that Ziegler-Nichols tuning to get it to balance in an empirical way. If anyone wants to know more about this, or wants the mathematical theory behind it and how you can derive controllers from maths alone, leave a comment below and vote up that comment, and I'd be happy to do that.
So thanks! Okay, so I finished shooting my video, I finished editing, and then I found this beautiful plot on Wikipedia, but it's worth adding. This plot shows the step response of a system and the effect of changing the PID constants in front of the components. The first thing they tune here is the proportional component, then the integral component, then the derivative component, and the plot shows the change in the response of the system.
This plot here is showing how a system responds to a step input: the setpoint, where the system should be, is set to one, and you just watch how the response looks. Initially, you notice that this system has some steady-state error; that means the system's response doesn't converge to the setpoint. The way you deal with this is by increasing the integral component: they do this, and they remove their steady-state error. Unfortunately, as they increase the integral and proportional components, the overshoot of the system substantially increases. The way they deal with that is by increasing the derivative component. Because the derivative component resists change (it hates change), it helps in reducing the oscillations, as oscillations are of course change, and overshoot is change. It really just wants the line to be flat and stationary. So I hope this plot helped; I added it in after rendering.
See ya.
Thank you so much for the effort!
But when the error becomes zero, the P controller will output a zero signal and the plant will stop. Then the error will be great again, and this keeps on repeating... I'm missing something. In the I controller example (the oven), what do you mean by the effort, and what is it exactly?
Is there a power stage that can be changed based on controller output? When the controller outputs zero, does it physically mean disconnecting the plant and the power stage? If you can give me a real example, that would be helpful.
We zero out the integral every xx seconds for controlling pressure.
Truly brilliant presentation. Relating the equations/terms to the graphs made it understandable to people like me.
IDIOT.
Thank you so much for making It
This is the best video ever
Applied mathematical engineering physics is a rigorous, exhaustive, surefire way of seeing and understanding why and how any process works in theory, as in the critically damped, underdamped, and overdamped study of RLC circuit behavior. I am only an electronics engineering technician, and I worked for 17 years in the instrumentation and control of small and large thermal vacuum testing in a space laboratory environment, including vibration/acoustics and structural testing. Now I am retired, and I just want to learn and see how and why these things happen in theory and on paper.
This is the first PID controller video I have watched that has mentioned filtering. My PID controller has a filter on it, but I have it turned off and have been fine-tuning the I and D and can't get it quite perfect, at least for my autistic brain. I have not tried the filtering because it's labelled as digital filtering, but I'm going to give it a whirl.
Valuable, informative and helpful content. Thank you.
If I can't get this explanation I can't get any other. Thank you very much!
Many thanks for this amazing video. Best wishes!
Best video about PID
Finally a simple PID controller explanation!
Best explanation ever! Thank you!
loved it, simple, clear, and to the point
good job easy to understand
Kd=Td*Kd ….. Anyone else tripped up by this? Like saying find the value of x when x=0.6x. Am I missing something?
where is the next part anyways? anyone with the link? #7 in the playlist is completely something else