
Model Predictive Control

MPC

Model Predictive Control (MPC) is an advanced control strategy used to optimize the performance of dynamic systems over a finite time horizon. It is especially well-suited for systems with constraints and non-linear dynamics. MPC uses a model of the system to predict its future behavior and compute optimal control actions that minimize a cost function.

The basic steps involved in Model Predictive Control are given below; a minimal code sketch of the resulting control loop follows the list:

  1. System Model:

    • The first step in MPC is to create a mathematical model of the system to be controlled. This model describes the relationship between the system's current state, control inputs, and future state evolution over a prediction horizon. The model can be a continuous-time or discrete-time representation, depending on the system's dynamics.

  2. Prediction Horizon:

    • MPC operates over a finite prediction horizon: a window of future time, divided into a series of discrete time steps, over which the control inputs are optimized. The choice of prediction horizon depends on the system's dynamics and control objectives.

  3. Cost Function:

    • MPC uses a cost function that represents the system's performance objectives and constraints. The cost function typically includes penalties for tracking desired setpoints, minimizing control effort, and satisfying system constraints. The weights assigned to these terms influence the control strategy.

  4. Optimization:

    • At each time step, MPC performs an optimization to find the optimal sequence of control inputs over the prediction horizon that minimizes the cost function. The optimization takes into account the system model, prediction horizon, and current state information.

  5. Apply First Control Input:

    • After the optimization, MPC applies only the first control input from the computed sequence to the system. The remaining control inputs are discarded, and the optimization is repeated at the next time step with updated state information.

  6. Repeat:

    • The process is repeated at each time step, continuously updating the control inputs based on the current state and re-optimizing over the prediction horizon.
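These six steps repeat every control cycle. As a rough illustration (not FTC SDK code), the hypothetical Java sketch below shows the shape of that loop; `readState`, `solveForInputSequence`, and `applyToMotors` are placeholder names standing in for your own sensor reads, optimizer, and motor calls.

```java
/** Minimal receding-horizon MPC loop (illustrative sketch, not FTC SDK code). */
public class MpcLoop {
    static final int HORIZON = 10;   // number of future time steps N (assumed value)
    static final double DT = 0.02;   // control period in seconds (assumed value)

    public static void main(String[] args) throws InterruptedException {
        double[] state = readState();                       // 1. measure current state
        while (true) {
            double[][] plan = solveForInputSequence(state); // 2-4. optimize over the horizon
            applyToMotors(plan[0]);                         // 5. apply only the first input
            Thread.sleep((long) (DT * 1000));
            state = readState();                            // 6. re-measure and repeat
        }
    }

    // Placeholder: read odometry / IMU into a state vector such as [x, y, heading].
    static double[] readState() { return new double[] {0, 0, 0}; }

    // Placeholder: any optimizer that returns HORIZON control vectors.
    static double[][] solveForInputSequence(double[] state) {
        return new double[HORIZON][2];
    }

    // Placeholder: map one control vector to motor powers (e.g. motor.setPower(...)).
    static void applyToMotors(double[] u) { /* left empty in this sketch */ }
}
```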

Advantages of Model Predictive Control:

  • MPC allows for the incorporation of constraints on system states and control inputs, making it suitable for systems with physical limitations or safety requirements.

  • It can handle non-linear system dynamics and reject disturbances effectively.

  • MPC provides a systematic way to trade off control performance against actuator effort over a finite prediction horizon, leading to optimal control actions.

  • The control strategy can be adapted and modified by changing the cost function weights, making it flexible to various control objectives.

Challenges of Model Predictive Control:

  • MPC requires a good understanding of the system dynamics and accurate modeling, which can be challenging for complex and non-linear systems.

  • The computation required for the optimization can be intensive, especially for large prediction horizons and complex models. Real-time implementation may be challenging for certain systems.

MPC can be used in both auton and teleop!

  1. Path Following: During the autonomous period, the robot often needs to follow a specific path to reach predefined locations or perform tasks. MPC can be used to generate optimal control inputs (motor powers or servo positions) to drive the robot along the desired path accurately and efficiently. The cost function in this case could include penalties for deviation from the path and control effort (a small cost-function sketch follows this list).

  2. Trajectory Tracking: FTC robots may need to track complex trajectories or follow moving targets. MPC can predict the robot's future state based on the current state and trajectory information and then optimize control inputs to minimize tracking errors and reach the target smoothly.

  3. Obstacle Avoidance: MPC can help the robot avoid obstacles or navigate through narrow paths. By incorporating obstacle detection and distance sensor inputs, the cost function can penalize collisions or getting too close to obstacles.

  4. Stabilization: In teleop, MPC can be used to stabilize the robot and improve driver control. It can help maintain the robot's orientation and position by considering the robot's dynamic model and applying control inputs to counteract disturbances or driver inputs.

  5. Optimal Task Execution: For tasks with multiple steps or actions, MPC can plan and execute the optimal sequence of actions to complete the task efficiently. For example, in a stacking task, MPC can optimize the movement and timing of the robot to pick up and place game elements accurately.

  6. Adaptive Control: MPC can adapt to changes in the environment or robot dynamics by re-optimizing control inputs at each time step. This adaptive capability is beneficial when the robot encounters uncertainties or variations in the field or game elements.

  7. Energy Efficiency: FTC robots often run on batteries, and energy efficiency is crucial for maximizing performance. MPC can optimize control inputs to minimize power consumption while still achieving the desired objectives.
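To make the cost trade-offs in the path-following and energy-efficiency items concrete, here is a minimal, hypothetical Java sketch of a one-step quadratic cost that penalizes distance from the desired path point and the size of the control inputs. The weights `Q_POS` and `R_EFFORT` are assumed tuning constants, not values taken from any particular robot.

```java
/** One-step quadratic cost: position error vs. control effort (illustrative sketch). */
public class PathFollowingCost {
    static final double Q_POS = 4.0;     // assumed weight on path-tracking error
    static final double R_EFFORT = 0.5;  // assumed weight on motor effort / energy use

    /**
     * @param state  predicted position [x, y] at this step
     * @param target desired point [x, y] on the path at this step
     * @param u      control inputs (e.g. wheel powers) applied at this step
     */
    static double stepCost(double[] state, double[] target, double[] u) {
        double ex = state[0] - target[0];
        double ey = state[1] - target[1];
        double tracking = Q_POS * (ex * ex + ey * ey);    // penalize deviation from the path
        double effort = 0;
        for (double ui : u) effort += R_EFFORT * ui * ui; // penalize large motor commands
        return tracking + effort;
    }
}
```

Raising `Q_POS` relative to `R_EFFORT` makes the controller track the path more aggressively at the cost of larger (and more power-hungry) motor commands.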

While MPC can offer significant advantages in terms of optimization and performance, implementing it in FTC comes with some challenges:

  • Computation Time: MPC requires solving optimization problems at each time step, which can be computationally intensive. Real-time feasibility is essential, and care should be taken to ensure the control algorithm can run within the allowed time constraints.

  • System Modeling: Accurate modeling of the robot's dynamics is critical for MPC's success. Building a good model can be challenging, especially for complex robot mechanisms or non-linear behaviors.

  • Tuning Parameters: The effectiveness of MPC heavily relies on properly tuning the cost function and prediction horizon for the specific task and robot configuration. Extensive testing and fine-tuning are often necessary.

  • Sensor Noise and Delays: Sensor noise and communication delays can affect MPC's performance. Techniques such as Kalman filtering or sensor fusion may be needed to improve sensor data reliability.

Overall, Model Predictive Control can be a powerful tool for enhancing the control and performance of FTC robots, especially for tasks that involve complex trajectories, constraints, and optimization objectives. However, its successful implementation requires careful attention to modeling, tuning, and real-time constraints.

To restate the idea more formally: MPC uses a predictive model of the system to make control decisions over a finite time horizon. The goal is to optimize a cost function that represents the desired system behavior and constraints while taking the system's dynamics into account.

The main steps in MPC involve:

  1. Modeling: MPC requires a dynamic model that represents the system's behavior over time. This model can be a mathematical representation of the robot's motion, including its state variables (position, velocity, orientation, etc.), control inputs (motor powers, servo positions), and dynamics (kinematics and dynamics equations). The model describes how the state variables evolve over time in response to control inputs; a discrete-time sketch of such a model follows this list.

  2. Prediction: At each time step, the current state of the system is measured using sensors, and the MPC predicts the future state by applying the control inputs over a finite prediction horizon. The prediction horizon is a sequence of future time steps over which the system behavior is predicted.

  3. Cost Function: The MPC uses a cost function that quantifies the desired behavior and objectives of the system. The cost function includes terms that penalize deviation from the desired trajectory, control effort, and other constraints. The objective is to minimize the cost function while satisfying the system's constraints.

  4. Optimization: The control inputs over the prediction horizon are optimized to minimize the cost function. This optimization process can be solved using numerical optimization techniques, such as quadratic programming or nonlinear programming. The optimized control inputs are then applied to the system for the next time step.

  5. Receding Horizon Control: MPC is a receding horizon control strategy, meaning that at each time step, only the first control input of the optimized sequence is applied to the system. As time progresses, the optimization is repeated at each time step, using updated measurements and predictions.
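To make the modeling and prediction steps concrete, the hypothetical Java sketch below rolls a simple discrete-time model forward over the prediction horizon. The unicycle-style kinematics (x, y, heading driven by a forward speed and turn rate) are an assumption chosen for illustration; a real robot would substitute its own kinematic or dynamic model.

```java
/** Discrete-time prediction over a horizon with a simple kinematic model (sketch). */
public class Predictor {
    static final double DT = 0.02; // assumed time step in seconds

    // x(t+1) = f(x(t), u(t)) for a unicycle model: state = [x, y, heading], u = [v, omega]
    static double[] step(double[] x, double[] u) {
        return new double[] {
            x[0] + u[0] * Math.cos(x[2]) * DT,  // x position
            x[1] + u[0] * Math.sin(x[2]) * DT,  // y position
            x[2] + u[1] * DT                    // heading
        };
    }

    // Apply a candidate input sequence to predict the state trajectory over the horizon.
    static double[][] predict(double[] x0, double[][] inputs) {
        double[][] traj = new double[inputs.length + 1][];
        traj[0] = x0;
        for (int k = 0; k < inputs.length; k++) {
            traj[k + 1] = step(traj[k], inputs[k]);
        }
        return traj;
    }
}
```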

Mathematically, the MPC control law can be expressed as follows:

Given:

  • The dynamic model of the system: x(t+1) = f(x(t), u(t))

  • The cost function: J = Σ [cost(x(t), u(t))]

  • System state at time t: x(t)

  • Control input at time t: u(t)

The goal is to find the optimal control input sequence {u(t), u(t+1), ..., u(t+N-1)} that minimizes the cost function subject to constraints and system dynamics over the prediction horizon N.

The optimization problem can be formulated as:

minimize J(u(t), u(t+1), ..., u(t+N-1))

subject to:

  • x(t+1) = f(x(t), u(t))

  • x(t+2) = f(x(t+1), u(t+1))

  • ...

  • x(t+N) = f(x(t+N-1), u(t+N-1))

  • Constraints on control inputs: u_min ≤ u(t) ≤ u_max

  • Constraints on system states: x_min ≤ x(t) ≤ x_max

  • Terminal constraints: x(t+N) = x_desired (desired final state)

  • Terminal cost: J_terminal = cost_terminal(x(t+N)) (cost function at the terminal state)

The optimization problem is solved iteratively at each time step, and only the first control input u(t) is applied to the system. The process is then repeated for the next time step with updated measurements and predictions.
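Solving this optimization exactly usually calls for a QP or nonlinear solver, which can be heavy inside an FTC control loop. One simpler approximation is random shooting: sample many candidate input sequences, simulate each with the model, keep the cheapest, and apply only its first input. The sketch below is a hypothetical illustration of that idea, reusing the `Predictor` and `PathFollowingCost` sketches from earlier on this page; it shows the receding-horizon structure rather than a production-quality solver.

```java
import java.util.Random;

/** Random-shooting MPC: sample input sequences, score them, apply the best first input. */
public class ShootingMpc {
    static final int HORIZON = 10;       // prediction horizon N (assumed value)
    static final int SAMPLES = 200;      // candidate sequences per control cycle (assumed)
    static final double U_MAX = 1.0;     // assumed input bound: u_min <= u <= u_max
    static final Random RNG = new Random();

    /** Returns the first input u(t) of the lowest-cost sampled sequence. */
    static double[] control(double[] state, double[] target) {
        double bestCost = Double.POSITIVE_INFINITY;
        double[] bestFirstInput = new double[] {0, 0};
        for (int s = 0; s < SAMPLES; s++) {
            // Sample a candidate sequence {u(t), ..., u(t+N-1)} within the input bounds.
            double[][] u = new double[HORIZON][2];
            for (int k = 0; k < HORIZON; k++) {
                u[k][0] = (2 * RNG.nextDouble() - 1) * U_MAX; // forward speed
                u[k][1] = (2 * RNG.nextDouble() - 1) * U_MAX; // turn rate
            }
            // Simulate x(t+1) = f(x(t), u(t)) and accumulate J = sum of step costs.
            double[][] traj = Predictor.predict(state, u);
            double cost = 0;
            for (int k = 0; k < HORIZON; k++) {
                cost += PathFollowingCost.stepCost(traj[k + 1], target, u[k]);
            }
            if (cost < bestCost) {
                bestCost = cost;
                bestFirstInput = u[0];
            }
        }
        return bestFirstInput; // receding horizon: only u(t) is applied, then re-solve
    }
}
```

Increasing `SAMPLES` or `HORIZON` improves the plan but lengthens each control cycle, which ties directly back to the computation-time caveat discussed above.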

MPC provides a powerful control framework that can handle complex systems with constraints and optimize control inputs in real-time. It has applications in various fields, including robotics, industrial automation, and autonomous vehicles.
