
Linear Quadratic Regulator

LQR

The Linear Quadratic Regulator (LQR) is a control method for designing optimal controllers for linear time-invariant systems. It is a common control approach in a variety of engineering applications such as robotics, aerospace, and process control. LQR seeks control gains that minimize a performance criterion, commonly a weighted quadratic sum of the system's states and control inputs integrated over time. The LQR problem can be stated formally using the following dynamics for a continuous-time linear system:

dx/dt = Ax + Bu

where:

  • x is the state vector representing the system's internal states.

  • u is the control input vector.

  • A is the state matrix, describing how the states evolve over time.

  • B is the input matrix, describing how control inputs affect the state evolution.
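As a concrete sketch, a one-dimensional drivetrain with state x = [position, velocity] and motor power as the input could be modeled in Java as below. The DRAG and GAIN constants are illustrative assumptions, not measured values:

```java
// State: x = [position, velocity]; input: u = motor power in [-1, 1].
// DRAG and GAIN are hypothetical constants for illustration only.
public class DriveModel {
    static final double DRAG = 4.0;   // assumed velocity damping (1/s)
    static final double GAIN = 30.0;  // assumed power-to-acceleration gain (in/s^2)

    // dx/dt = A x + B u
    static final double[][] A = { { 0.0, 1.0 },
                                  { 0.0, -DRAG } };
    static final double[]   B = { 0.0, GAIN };

    // Evaluate the dynamics dx/dt at a given state and input.
    static double[] dynamics(double[] x, double u) {
        return new double[] {
            A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u
        };
    }
}
```

The first row of A encodes "position changes at the rate of velocity"; the second row encodes damping plus the motor's effect on acceleration.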

The goal of the LQR is to find an optimal control input u(t) that minimizes the following cost function:

J = ∫ (xᵀQx + uᵀRu) dt

where:

  • Q is the state weighting matrix, a positive semi-definite matrix that penalizes deviations from the desired state.

  • R is the control input weighting matrix, a positive definite matrix that penalizes large control inputs.
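A common starting point for choosing Q and R is Bryson's rule: weight each state and input by the inverse square of its maximum acceptable value. A minimal sketch (the limits passed in are illustrative, not tuned values):

```java
// Bryson's rule: Q_ii = 1 / maxState_i^2, R_ii = 1 / maxInput_i^2.
public class LqrWeights {
    // Build a diagonal weighting matrix from maximum acceptable magnitudes.
    static double[][] diagonal(double... maxValues) {
        int n = maxValues.length;
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++) {
            m[i][i] = 1.0 / (maxValues[i] * maxValues[i]);
        }
        return m;
    }
}
```

For example, `LqrWeights.diagonal(2.0, 10.0)` would build a Q that tolerates roughly 2 units of position error and 10 units/s of velocity error, and `LqrWeights.diagonal(1.0)` an R for a motor power limited to ±1.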

The LQR algorithm provides the optimal control gains, K, that lead to the optimal control input:

u(t) = -Kx(t)

The optimal control gains K can be obtained by solving the algebraic Riccati equation:

AᵀP + PA - PBR⁻¹BᵀP + Q = 0

where P is a symmetric positive definite matrix, known as the cost-to-go matrix, found by solving the Riccati equation. The control gains K are then derived from P, B, and R:

K = R⁻¹BᵀP
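In the scalar case (one state, one input) the Riccati equation reduces to a quadratic in p and can be solved in closed form, which makes the gain calculation easy to verify by hand. A minimal sketch:

```java
// Scalar LQR: for dx/dt = a*x + b*u with cost J = ∫(q*x^2 + r*u^2) dt,
// the Riccati equation becomes 2*a*p - (b^2/r)*p^2 + q = 0.
public class ScalarLqr {
    // Positive root of the scalar algebraic Riccati equation.
    static double solveP(double a, double b, double q, double r) {
        return r * (a + Math.sqrt(a * a + b * b * q / r)) / (b * b);
    }

    // Optimal gain K = R^{-1} B^T P, which here is b*p/r.
    static double gain(double a, double b, double q, double r) {
        return b * solveP(a, b, q, r) / r;
    }
}
```

With this gain, the closed-loop dynamics a - b*K equal -sqrt(a² + b²q/r), which is always negative, so the scalar closed loop is stable for any positive q and r.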

Pros:

  1. Produces optimal gains for the chosen cost function, which typically yields better tracking performance than hand-tuned feedback.

  2. Handles linear (or linearized) systems with multiple states and inputs naturally.

  3. Under standard controllability assumptions, the resulting closed-loop system is guaranteed to be stable.

Cons:

  1. It requires an accurate model of the system dynamics (the A and B matrices).

  2. It assumes a quadratic cost function, which may not be appropriate for all applications.

  3. It is designed for continuous-time systems and must be discretized for digital control.
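On the discretization point: for the short loop times of a robot controller, a first-order (Euler) discretization is often a reasonable approximation. A sketch, assuming a small sample time dt:

```java
// Euler discretization of dx/dt = A x + B u at sample time dt:
// A_d ≈ I + A*dt,  B_d ≈ B*dt. Adequate only for small dt.
public class Discretize {
    static double[][] discreteA(double[][] a, double dt) {
        int n = a.length;
        double[][] ad = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                ad[i][j] = (i == j ? 1.0 : 0.0) + a[i][j] * dt;
        return ad;
    }

    static double[] discreteB(double[] b, double dt) {
        double[] bd = new double[b.length];
        for (int i = 0; i < b.length; i++) bd[i] = b[i] * dt;
        return bd;
    }
}
```

More accurate methods (e.g. the matrix exponential) exist, but Euler keeps the idea visible: the discrete state update is x[k+1] = A_d x[k] + B_d u[k].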

In practice, LQR is often used in combination with state estimation techniques like Kalman filters to handle sensor noise and uncertainties in the state measurements. The LQR controller is a powerful tool for designing optimal control systems and is widely used in engineering and control applications.

In the FIRST Tech Challenge (FTC), the Linear Quadratic Regulator (LQR) can be used to design optimal controllers for the robot's motion. The goal is to make the robot move along a desired trajectory or maintain a specific state while minimizing control effort and error.

In FTC, applying LQR would typically follow these steps:

  1. System Modeling:

    • The first step is to model the robot's dynamics as a linear system. This involves determining the state variables (position, velocity, orientation, etc.) and the control inputs (motor powers, servo angles, etc.).

    • The system dynamics can be described in matrix form: dx/dt = Ax + Bu, where x is the state vector, u is the control input vector, A is the state matrix, and B is the input matrix.

  2. State and Control Input Definitions:

    • Define the state vector (x) that represents the robot's internal states, such as the position and velocity in the x, y, and theta directions.

    • Define the control input vector (u) that represents the motor powers or servo angles needed to control the robot's motion.

  3. Cost Function Formulation:

    • Define the cost function that needs to be minimized. The cost function typically includes terms that penalize deviations from the desired state and control efforts.

    • The cost function has the form J = ∫ (xᵀQx + uᵀRu) dt, where Q is the state weighting matrix and R is the control input weighting matrix.

  4. Tuning the Weighting Matrices:

    • The effectiveness of the LQR controller depends on the appropriate selection of Q and R matrices. Tuning these matrices allows the designer to prioritize the importance of different states and control inputs in the optimization process.

    • Higher values in the Q matrix penalize deviations from the desired states more heavily, while higher values in the R matrix penalize large control efforts.

  5. Solving the Riccati Equation:

    • The next step is to solve the algebraic Riccati equation to find the state cost-to-go matrix (P).

    • The Riccati equation is AᵀP + PA - PBR⁻¹BᵀP + Q = 0, where A, B, Q, and R are known from the system model.

    • The solution for P is a symmetric positive definite matrix.

  6. Calculating the Control Gains:

    • Once P is obtained, the optimal control gains K can be calculated as K = R⁻¹BᵀP.

    • These control gains will provide the desired control input (motor powers or servo angles) to drive the robot along the desired trajectory or maintain the desired state.

  7. Implementing the Controller:

    • Finally, the calculated control gains (K) are used to implement the LQR controller in the robot's code.

    • The controller takes sensor measurements to estimate the current state, calculates the control input using the LQR algorithm, and applies the control input to the motors or servos to drive the robot.
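Step 7 above can be sketched as a plain-Java controller class. In an actual FTC OpMode, the state estimate would come from encoders or the IMU and the output would go to a motor via `setPower()`; the gains passed to the constructor are assumed to be pre-computed offline from the Riccati solution:

```java
// One LQR state-feedback update: u = -K (x - xRef), saturated to motor range.
public class LqrController {
    final double[] k;     // control gains from the Riccati solution (assumed given)
    final double[] xRef;  // desired state, e.g. [target position, target velocity]

    LqrController(double[] k, double[] xRef) {
        this.k = k;
        this.xRef = xRef;
    }

    // Compute the control input for the current state estimate,
    // clamped to the valid motor power range [-1, 1].
    double update(double[] x) {
        double u = 0.0;
        for (int i = 0; i < k.length; i++) {
            u -= k[i] * (x[i] - xRef[i]);
        }
        return Math.max(-1.0, Math.min(1.0, u));
    }
}
```

Called once per loop iteration, this drives the state toward xRef while the velocity gain k[1] damps the approach, replacing separate proportional and derivative terms.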

The main difference between the Linear Quadratic Regulator (LQR) and other control systems lies in their approach to control design and optimization:

  1. Control Objective:

    • LQR: LQR is an optimal control technique that aims to minimize a specific cost function representing the system's performance. It uses state feedback and calculates control gains to achieve the best trade-off between state tracking and control effort.

    • Other Control Systems: Other control systems, such as Proportional-Integral-Derivative (PID) controllers or simple feedback control, focus on achieving stability and steady-state performance. They adjust control signals based on the error between the desired and current states.

  2. Control Design:

    • LQR: LQR uses a mathematical optimization approach to find the optimal control gains based on the system's dynamics and a cost function. It requires a model of the system's behavior (state-space model) and tuning of weight matrices to prioritize state and control input penalties.

    • Other Control Systems: Other control systems are generally simpler and do not require a mathematical model of the system. They often involve manual tuning of control gains to achieve the desired response.

  3. Performance Optimization:

    • LQR: LQR aims to optimize the system's performance by minimizing a defined cost function. It considers both the state and control input penalties and computes control gains that yield the best overall performance.

    • Other Control Systems: Other control systems aim to stabilize the system and reduce errors. While they can be effective for many practical applications, they may not optimize performance as rigorously as LQR.

  4. Applicability and Complexity:

    • LQR: LQR is well-suited for continuous-time linear systems and is particularly effective for controlling dynamic systems with multiple states and control inputs. It is more complex to implement and requires a good understanding of system modeling and optimization techniques.

    • Other Control Systems: Other control systems like PID are simpler and can be applied to a wide range of systems, including both continuous and discrete-time systems. They are generally easier to implement and tune.

  5. Robustness:

    • LQR: LQR design is based on the assumption that the system's model is accurate. If the system deviates significantly from the model, the performance may degrade.

    • Other Control Systems: Some other control systems, like PID, are more robust to model uncertainties and may provide satisfactory performance even with some system variations.
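The design difference shows up directly in the control laws. For the same 1-D position task, a PID update acts on a single error signal and its history, while LQR weights the full state; a minimal side-by-side sketch (all gains here are illustrative, not tuned values):

```java
// PID acts on one error signal (plus its integral and rate);
// LQR is full-state feedback: position error and velocity each get a gain.
public class PidVsLqr {
    static double pid(double error, double errorSum, double errorRate,
                      double kP, double kI, double kD) {
        return kP * error + kI * errorSum + kD * errorRate;
    }

    // u = k1*(target - position) - k2*velocity, i.e. u = -K(x - xRef)
    // written out for x = [position, velocity], xRef = [target, 0].
    static double lqr(double posError, double velocity, double k1, double k2) {
        return k1 * posError - k2 * velocity;
    }
}
```

Structurally the two laws can look similar (the velocity term plays the role of the derivative term), but the LQR gains k1 and k2 come from the optimization rather than manual tuning.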
